Berwick, Friederici, Chomsky and Bolhuis (BFCB) have a new paper that discusses Darwin’s Problem and Broca’s Problem (i.e. how brains embody language). The paper is a good short review of some of the relevant issues. I found parts very provocative and timely, other parts confusing. Here are some (personal) highlights with commentary.
1. BFCB review two facts that set boundary conditions on any evolutionary speculation: (i) “[h]uman language appears to be a recent evolutionary development” (roughly within the last 100,000 years, citing Tattersall) and (ii) “the capacity for language has not evolved in any significant way since human ancestors left Africa” (roughly 50-80,000 years ago). In sum, the claim is “that the human language faculty emerged suddenly in evolutionary time and has not evolved since” (p. 1). These two features suggest two conclusions.
First, that UG emerged more or less fully formed and that whatever precipitated its emergence was something pretty simple. It was simple in two ways: its design structure was not the result of a lot of selective massaging, and whatever triggered the change must have been pretty minimal, e.g. a single mutation. I use ‘precipitate’ deliberately. The suggested picture is of a chemical reaction in which the addition of a single novel element results in a drastic qualitative change. For language, the idea is that some small addition to the pre-existing cognitive apparatus results in the distillation of FL/UG.
I like this picture a lot. It is the one that Chomsky has presented several times in outlining the target of minimalist speculation. If one assumes (as I do) that GB (or its very near cousins, viz. LFG, GPSG, HPSG, RG etc.) roughly describes FL/UG, then the project is to try to understand how something of this apparent complexity is actually quite simple. This will involve two separate but related projects: (i) eliminating the internal modularity of FL/UG as described by GB and (ii) showing that many of the operational constraints are actually reflections of more general cognitive/computational features of mammal minds.
I have discussed (ii) in various other posts (see here, here and here). As regards (i), Chomsky’s unification of Ross’s Islands via Subjacency and ‘Move alpha’ (see ‘On Wh Movement’) offers a good model of what to look for, though the minimalist unification envisioned here is far more ambitious, as it involves unifying domains of grammar that Generative Grammar (GG) has taken to be very different from day one. For example, since the get-go GG has distinguished phrase structure rules from movement rules, and both from construal rules. Unificationist ambitions (aka: theoretical hubris?) motivate trying to reduce these apparently distinct kinds of rules to a common core. You gentle readers will no doubt know of certain current suggestions of how to unify Phrase Structure and Movement rules as species of Merge (E and I respectively). There has also been a small (in my humble opinion, much too small!) industry aiming to unify movement and control (yours truly among others, efforts reviewed in Boeckx, Hornstein and Nunes) and movement and binding (starting with Chomsky’s adoption of Lebeaux’s suggestion regarding reflexives in Knowledge of Language). From my seat in the peanut gallery, these attempts have been very suggestive and largely persuasive (I would think that, wouldn’t I?), though there are still some puzzles to be tamed before victory is declared. At any rate, aside from unification being a general scientific virtue, the project gains further empirical motivation in the context of Darwin’s problem given the boundary conditions adumbrated in BFCB.
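For readers who like things concrete, the E-/I-Merge unification can be sketched as a toy program. This is my own illustration, not anything from BFCB, and every name in it (`merge`, `external_merge`, `internal_merge`) is mine: the point is just that “phrase structure” and “movement” come out as the same set-building operation, differing only in where the second argument comes from.

```python
# Toy sketch (mine, not BFCB's): a syntactic object is a lexical item
# (a string) or an unordered pair of syntactic objects (a frozenset).

def merge(a, b):
    """Merge two syntactic objects into the unordered set {a, b}."""
    return frozenset([a, b])

def contains(obj, part):
    """True if `part` is a term of `obj` (obj itself or buried inside it)."""
    if obj == part:
        return True
    if isinstance(obj, frozenset):
        return any(contains(x, part) for x in obj)
    return False

def external_merge(a, b):
    """E-Merge: combine two distinct objects (classic phrase structure)."""
    assert not contains(a, b) and not contains(b, a)
    return merge(a, b)

def internal_merge(obj, part):
    """I-Merge: re-merge a term already inside obj (movement as copying:
    the 'moved' item ends up with two occurrences in the structure)."""
    assert contains(obj, part)
    return merge(part, obj)

# Build "ate what", then 'move' "what" to the edge:
vp = external_merge("ate", "what")
moved = internal_merge(vp, "what")
```

Note that the two operations share a single body (`merge`); the only difference is the precondition on their arguments, which is the sense in which movement is just a species of Merge.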
The second consequence is that the evolution of FL/UG has little to do with natural selection (NS). Why? If FL emerged 100,000 years ago and humanity started dispersing 80,000 years ago, then this leaves a very short time for NS to work its (generally assumed) gradual magic. Note that whatever took place must have happened entirely before the move out of Africa, for otherwise we would expect group variation in FL/UG. If NS was the prime factor in the evolution of FL/UG, why did it stop after a mere 20,000 years? Did NS only need 20,000 years to squeeze out all the possible variation? If so, there couldn’t have been much to begin with (i.e. the system that emerged was more or less fully formed). If not, then why do we see no variation in FL/UG across different groups of humans? Over the last 40,000 years we have encountered many isolated groups of people, with very distinctive customs, living in very diverse and remote environments. Despite these manifest differences all humans share a common FL, as attested to by the fact that kids from any of these groups can learn the language of any other in essentially the same way (even the Piraha!). The absence of any perceptible group differences in FL/UG suggests that NS did not drive the change from pre-grammatical to grammatical, or that if it did, there was very little variation to begin with.
2. BFCB provide an argument that communication is “ancillary to language design.” This relates to a previous post (here) where I discussed two competing evolutionary scenarios, one driven by communication, the other by enhanced cognition. As even a non-careful reader will have surmised, I am sympathetic to the second scenario. However, truth be told, I don’t understand the argument BFCB provide for the claim that communicative efficacy is only (at best) a secondary consideration for grammar design. The paper notes “the deletion of copies,” which “make[s] sentence production easier” but “renders sentence perception harder.” They conclude from this that deletion “follows the computational dictates of factor (iii)” (i.e. third factor concerns) over the “principle of communicative efficiency.” This, they continue, supports the conclusion “that externalization (a fortiori communication) is ancillary to language design” (p. 4).
Here’s what I don’t get: why is communicative efficiency measured by easing the burden on the receiver rather than on the sender? Why is ease of interpretation diagnostic of a communicative end, while ease of expression is not and is taken instead to reflect third factor concerns? Inquiring minds would love an answer, as this presumed asymmetry appears to license the conclusion that deletion is a third factor consequence and that mapping to AP is a late accretion. I don’t see the logic here.
Moreover, doesn’t this further imply that there is no deletion on the way to CI? And don’t we regularly assume that there are “LF” deletions (e.g. see Chomsky’s 1993 paper that launched the Minimalist Program)? Why should there be “deletion” of copies at LF if deletion is simply a way of reducing the computational burden arising from having to express phonological material? I don’t get it. Help!
3. The paper has an important discussion of human lexicalization and what it means for Darwin’s problem. Human language has two distinctive features.
The first is the nature of the computational system, viz. its hierarchical recursion. I’ve discussed this elsewhere (here and here) so I will spare you more of the same.
The second concerns the computational atoms, i.e. words. There are at least two amazing things about them. First, we have soooo many and they are learned soooo quickly! Words are “learned with amazing rapidity, one per waking hour at the peak period of language acquisition” (p. 5). I’ve discussed some of this before and Lila has chimed in with various correctives. However, as syntacticians like me tend to focus on grammar, rather than words, the sheer size of the vocabulary and the speed of its acquisition bear constant repeating. Just as no other animal has anything like human grammatical structure, no other animal has anything quite like our lexicon, either quantitatively or qualitatively.
Let’s spend a second on these qualitative features. As BFCB note, human lexical items “appear to be radically different from anything found in animal communication” (p. 4). In discussing work by Laura Petitto (one of Nim Chimpsky’s original handlers), BFCB highlight her conclusion that chimps “do not really have ‘names for things’ at all. They only have a hodge-podge of loose associations,” in contrast with even the youngest children, whose earliest words are “used in a kind-concept constrained way” (p. 5).
In fact, these lexical constraints are remarkably complex. Chomsky has repeatedly noted (starting with Reflections on Language and in almost every subsequent philo book since) that “[e]ven the simplest elements of the lexicon do not pick out (‘denote’) mind independent entities. Rather their regular use relies crucially on the complex ways in which humans interpret the world: in terms of such properties as psychic continuity, intention and goal, design and function, presumed cause and effect, Gestalt properties and so on” (p. 5). This raises Plato’s problem in the domain of lexical acquisition and, given the noted vast difference between human lexical concepts and animal “words,” a strong version of Darwin’s problem as well.
It would be nice if we could tie the two distinctive features of human language (viz. unbounded hierarchical structure and a vast and intricate vocabulary) together somehow. Boy would it be nice. I have a rough rule of scientific thumb: keep miracles to a minimum! We already need (at least) one for grammar; now it looks like we need a second for the human lexicon. Can these be related? Please?!
Here’s some idle speculation with the hope that wishing might make it so. First consider the complexity of lexicalized concepts. In the previous post on Darwin’s Problem, I noted the H-VSK hypothesis that what grammar adds is the capacity to combine predicates from otherwise encapsulated modules into single representations. I suggested that this fits well with the autonomy of syntax thesis, which is basically another way of describing the fact that grammatical operations, unlike module internal operations, are free to apply to predicates independently of their module specific “meanings.” Autonomy, in effect, allows distinct predicates to be brought together. The power to conjoin properties from different modules smells similar to what lexical items in human language do, viz. they combine disparate features from various modules (natural physics, persons, natural biology etc.) to construct complex predicates. If so, the emergence of an abstract syntax may be a pre-condition for the formation of human lexical concepts, which are complex in that they combine features from various informationally encapsulated modules. (Note: this conjecture has roots in earlier speculations of the Generative Semanticists.)
Let’s now address the size of the lexicon and its speed of acquisition. Lila noted in her posts (see here and her paper here) that syntactic bootstrapping lies behind the explosion of lexical acquisition that we witness in kids. Until the syntax kicks in, lexical items are acquired slowly and laboriously. After it kicks in, we get an explosion in the growth of the lexicon. So, following H-VSK, grammar matters in forming complex predicates (aka lexical items) as it allows words to combine cross-module features, and, following Gleitman and colleagues, grammar underpins the explosive growth of the lexicon. If this is correct, then maybe the two miracles are connected, but please don’t ask me for the details. As I said, this is VERY speculative.
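The encapsulation idea above can also be put in toy-program form. This sketch is entirely mine (the module names and predicates are invented for illustration, not drawn from H-VSK or BFCB): module-internal combination is typed to a single module and refuses cross-module mixes, while the “autonomous syntax” operation is blind to module labels and so can build the cross-module bundles that human lexical concepts seem to be.

```python
# Toy sketch (mine): predicates live in encapsulated modules.
MODULES = {
    "physics":  {"solid", "heavy"},
    "persons":  {"intentional", "agent"},
    "function": {"for-drinking"},
}

def module_of(pred):
    """Return the (unique, for this toy) module a predicate belongs to."""
    for name, preds in MODULES.items():
        if pred in preds:
            return name
    raise ValueError(f"unknown predicate: {pred}")

def module_internal_combine(p, q):
    """Pre-linguistic combination: only allowed within one module."""
    if module_of(p) != module_of(q):
        raise TypeError("encapsulation violated: predicates cross modules")
    return frozenset([p, q])

def syntactic_combine(*preds):
    """'Autonomous syntax': ignores module labels entirely, so it can
    conjoin, say, a physics predicate with a function predicate into
    a single complex concept."""
    return frozenset(preds)

# A 'cup'-like concept needs both physics and function features:
cup = syntactic_combine("solid", "for-drinking")
```

The design choice doing the work is that `syntactic_combine` never calls `module_of`: autonomy just is indifference to where a predicate came from.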
4. Broca’s problem gets a lot of airtime in BFCB. I am no expert in the matters discussed, but I confess to coming away more skeptical of the reported results than I expected to be. Friends more knowledgeable than I am in these matters tell me that the reported results are extremely contentious and that very reasonable people strongly disagree with the specific claims. Here is a paper by Rogalsky and Hickok that reviews the evidence that Broca’s area shows specific sensitivity to syntactic structure. Their conclusion does not fit well with that in BFCB: “…our review leads us to conclude that there is no compelling evidence that there are sentence specific processing regions within Broca’s area” (p. 1664). Oh well.
To end: IMO, the most valuable part of BFCB is how it frames Darwin’s problem in the domain of language. It correctly stresses that before addressing the evolution question we need to know what it is that we think evolved: the basic design of the system. FL has two parts: a hierarchical recursive syntax and a conceptually distinctive and very large lexicon. FL seems to have sprung up very rapidly. The Minimalist Program asks how this could have happened and has started to engage in the unification necessary to offer a reasonable conjecture. It’s a sign of a fecund research program that it renews itself by adding new questions and combining them with old results to make new conjectures and launch new research projects. As BFCB show, by this measure, Generative Grammar is alive and well.