“Nothing talks like humans do. Nothing even comes close. This sets up an interesting evolutionary problem: how did this unique capacity arise in the species? Unfortunately, approaching this question intelligently requires combining skills that seldom travel in tandem. Linguists know a lot about the principal features of human language but little about how evolution works, and biologists know a lot about how evolution works but little about the distinctive properties of human language. Enter Berwick and Chomsky’s marvelous little book. In a mere four lucid and easily accessible chapters they educate linguists about the central mechanisms driving evolution and bring biologists up to date on the key distinctive features of natural language. Anyone interested in this topic must read this book.”
Nor am I the only one who enjoyed it. Others more educated than me loved it too: Ian Tattersall, Martin Nowak, and Stephen Crain. So give yourself a post-New Year's gift and buy and read the book.
Any idea when this will actually appear? According to the MIT Press website the offline version appeared in December, but I can't find it anywhere, at least in Canada.
Dunno. I got mine from MIT Press.
Yeah, I ordered it from MIT Press and it appeared in my mailbox last week. So it definitely already exists!
It was Canada lagging behind, as I should've suspected right away...
A lecture by Bob Berwick on the (topic of the) book
I assumed this book would just be a re-hash of the fairly tiresome, hyper-sceptical 'mystery of language evolution' perspective the authors usually adopt. And it is in some respects. But it also includes a surprisingly decent discussion of recent literature on animal cognition. Berwick's influence is clearly strong, i.e. less rhetorical posturing, more engagement and literature reviewing. But both authors only brush over their core question of how hierarchy is actually established, pointing languidly to 'some algorithm' responsible for labeling (p. 10). It should be stressed, I think, that even Chomsky's more recent technical work doesn't go far beyond this 'some algorithm' attitude (2013, 2015). From the perspective of brain dynamics, 'some algorithm' can be explored in a number of interesting ways, as I mention here (http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01515/abstract) and here (http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00715/abstract) and in upcoming papers (see also Boeckx and Theofanopoulou's useful response to the latter paper: http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00784/full).
Berwick's influence is clearly strong, generating less rhetorical posturing and more critical engagement and literature reviewing. Grand Chomskyan sentences like ‘Rational inquiry into the evolution of some system evidently proceed only as far as its nature is understood’ and ‘These truisms hold of the study of the human language faculty just as for other biological systems’ are nicely supplemented by a fair amount of detail and actual proposals and directions. It’s a bit surreal having these authoritative Chomskyan phrases right next to pithy Berwickean jokes, though. But this isn’t a novel, it’s a pop-sci book, so it works well enough.
The authors hypothesize about Neanderthal braincase size, for some reason completely ignoring Boeckx and Benitez-Burraco's important 'globularity' proposal (http://journal.frontiersin.org/article/10.3389/fpsyg.2014.00282/abstract), which is worth taking seriously. To top it off, there are quite a few typos and incorrect citations (dates, ordering, etc.). Chomsky even incorrectly cites POP as 2012, even though it was published online on January 6th 2013 and is widely cited as a 2013 paper.
Berwick’s (I assume the influence is his, given Chomsky’s typically evasive responses to neuro-related Q&A sessions, i.e. ‘I will answer your question about X by briefly mentioning X-related topics before slowly beginning a monologue about Y’) approach to neurolinguistics is also hopelessly outdated. He sticks purely to localisation issues, answering the ‘How’ question of language evolution by just pointing to good old BA44 and 45. This ignores how the brain actually *functions* (via oscillations and their numerous coupling operations: phase-amplitude, phase-phase, etc.) and keeps solely to our basic understanding of where in the brain complex dynamic operations centre. They even ‘(speculatively) posit that the word-like elements, or at least their features as used by Merge, are somehow stored in the middle temporal cortex as the “lexicon”’ (p. 159). This is really quite breathtaking in its simplicity, and ignores well-accepted ‘truisms’ in cognitive neuroscience that conceptual representations are widely distributed across many other regions, even if the middle temporal cortex acts as a crucial memory buffer in phrase structure building (just as Broca’s area is most likely a similar kind of buffer in syntactic computation, and not the ‘seat of syntax’ as Friederici has often claimed). I can guarantee that if I had put my hand up in graduate neurolinguistics classes and told everyone that the lexicon is in the middle temporal cortex, I would have been laughed out of the room.
In brief, the book is a useful guide to minimalist approaches to language evolution, and I’d most likely recommend it to an outsider. But it ignores far too much of the genetics, neurobiology, and neuropsychological literature to be taken very seriously.
I've elaborated on these issues here: https://elliotmurphyblog.wordpress.com/2016/01/16/review-why-only-us-by-berwick-and-chomsky/
“To externalize the internally generated expression what John is eating what, it would be necessary to pronounce what twice, and that turns out to place a very considerable burden on computation, when we consider expressions of normal complexity and the actual nature of displacement by Internal Merge. With all but one of the occurrences of what suppressed, the computational burden is greatly eased. The one occurrence that is pronounced is the most prominent one, the last one created by Internal Merge: otherwise there will be no indication that the operation has applied to yield the correct interpretation. It appears, then, that the language faculty recruits a general principle of computational efficiency for the process of externalization”
Does anyone have a reference to a discussion of what this 'considerable burden' is, given the existence of copy pronunciation of wh-words in C position in some languages (but not the base position in addition), and also Greg Kobele's discussion of copying in Yoruba, etc.?
I think his point is that if you take a complex wh-phrase, you need to run the phonological computation multiple times on it. Scope-marking constructions tend to only involve monolexemic items, so you don't get `which nice statues of Mary that Fred bought did you think which nice statues of Mary that Fred bought Sue sold which nice statues of Mary that Fred bought'.
Of course that might also be 'functional' in that it would be nice to get your sentence out of your mouth in a timely manner, before the tiger gets too close or the deer gets away or whatever. So I find that bit of discussion extremely unsatisfactory, due to insufficient development of details.
I'm also having a very hard time following this line of argument. It seems to rely on a lot of implicit assumptions that are not spelled out, and it doesn't really take into account what role shared knowledge and pragmatic expectations play in actual interaction (ironically ^^).
I've read a bit, but not all of the book. This passage in particular perplexed me:
“There is, then, a conflict between computational efficiency and interpretive-communicative efficiency. Universally, languages resolve the conflict in favor of computational efficiency. These facts at once suggest that language evolved as an instrument of internal thought, with externalization a secondary process"
What is the support for the claim that this is universally true? And is there anyone who could elaborate on what "interpretive-communicative efficiency" means in greater detail? Is it "produce all the copies"?
So I don't really buy this particular proposal for lack of copy pronunciation, but the argument is meant to go as follows:
(i) if you have copies, it's the same element in two structurally distinct positions;
(ii) each structural position is in principle able to be spelled out;
(iii) spelling out requires some computation (figuring out phonological rules, allomorphs, etc.);
(iv) even spelling out a trace as a pronoun, say, requires some computation (though less than spelling out the whole copy);
(v) for the purposes of communication, it would be good to spell out both the scope position of a wh-phrase and its theta-position, as both are crucial for semantic interpretation;
(vi) assuming a movement derivation (so putting aside building the dependency in some other way, say by Agree, or by resumption), it would be better for communication ("interpretive-communicative efficiency") to pronounce both bits of the chain so both scope and theta position can be identified;
(vii) in addition, empty elements raise processing issues for parsing (you need to find the gap);
(viii) so rather than a system that maximises communicative efficiency here, we have one that minimises computation, to the detriment of both parsing and hence communication (where's the gap? damn, I can't find the gap!);
(ix) note, @Avery, that Chomsky's point is not to rule out all functional explanations, but to rule out ones based on communicative efficiency (so one could take minimising computation as motivated by your tiger scenario, though the causal evolutionary story here seems, well, a little weak!);
(x) the extension to 'universally true' is that other cases in syntax (islands, ECP, etc.) seem more amenable to computational as opposed to communicative explanations, which I think is probably right, though 'universal' is presumably to be taken as implicitly restricted by 'given what we know about syntax'.
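The cost asymmetry in (iii)-(viii) can be made concrete with a toy model (my own illustration, not from the book, and the function name and cost metric are assumptions): treat a movement chain as one phrase occupying several structural positions, and let externalization cost be the number of words phonologically realized.

```python
# Toy illustration (not from the book): externalization cost of a movement chain.
# Assumption: cost = number of words phonologically realized at spell-out.

def spellout_cost(phrase, positions, pronounce_all):
    """Count words realized when externalizing a chain.

    pronounce_all=True  -> every copy is spelled out (communicatively helpful:
                           both scope and theta positions are audible).
    pronounce_all=False -> only the highest copy is spelled out, as natural
                           languages reportedly do.
    """
    copies = positions if pronounce_all else 1
    return copies * len(phrase)

# Kobele-style example: a complex wh-phrase moving through three positions.
phrase = "which nice statues of Mary that Fred bought".split()  # 8 words
print(spellout_cost(phrase, positions=3, pronounce_all=True))   # 24
print(spellout_cost(phrase, positions=3, pronounce_all=False))  # 8
```

The gap between 24 and 8 grows with both phrase length and chain length, which is the sense in which suppressing copies "greatly eases" the computational burden, while a monolexemic scope marker (as in wh-copy languages) adds only one word per intermediate position.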
So the computational story strikes me as odd because:
a) we have the wh-copy languages where copies of the wh word appear in intervening C positions but not the original one.
b) we have the resumptive pronoun languages where a pronoun appears in the original position, sometimes even in questions.
So contrary to what they say a bit later, computational efficiency (construed as you suggest it is to be construed here) does not always win, but there remains an interesting typological question of why wh-words never seem to be left in situ. This applies to both questions and relative clauses, where resumptive pronouns are extremely common but in-situ specially marked pronouns are apparently nonexistent (Keenan noted their absence in his 1985 chapter on RCs in the Shopen typology volume, and nobody has crawled out of the woodwork brandishing an example so far).
So I think there is a real puzzle there, but not one that has anything to do with efficiency or communication (and one that the opponents of UG seem to want to ignore). The generic problem with explanations in terms of 'communicative efficiency', it seems to me, is that they are pretty flimsy unless backed by a serious analysis of communication, including the role of the shared environment and non- or semi-linguistic add-ons such as gesture, which is often lacking.
There's an interesting review of "Why only us" here.
I disagree. Hauser's 'review' is not interesting; it's just an extended endorsement from one close co-author to another. I'm genuinely amazed that anyone is taking his slew of magnanimous adjectives seriously. It's true that he makes a few minor objections, but some of the more serious problems with the book are addressed here: http://www.biolinguistics.eu/index.php/biolinguistics/article/view/415/361