It’s been a bad few weeks to be a Chomskyan. It seems like everywhere you turn, someone is claiming that the core ideas of generative linguistics are relics of a bygone era, that the original lessons of the cognitive revolution can be discarded, and that those of us who study the language faculty from the standpoint of classical cognitive science are standing around throwing hissy-fits while the real scientists repeatedly show us how the facts on the ground disprove every idea we ever had.
In Scientific American, Ibbotson and Tomasello (henceforth IT), here, take issue with two key features of the generative enterprise. First, IT tell us that languages do not have “an underlying computational structure”. Then IT tell us that there is no special biological foundation for language, no innate structure that determines how languages can and cannot be structured. IT tell us that “instead, grammar is the product of history (the processes that shape how languages are passed from one generation to the next) and human psychology (the set of social and cognitive capacities that allow generations to learn a language in the first place).”
As I inch along to my death alongside the ideas of generative linguistics, the IT proposal looks not like a revolutionary new idea, but like the revival of some very old and discredited ideas, with no answer to the fundamental questions that bankrupted them to begin with.
IT characterize their revolution thus: “They [children, JL] inherit the mental equivalent of a Swiss Army knife: a set of general-purpose tools—such as categorization, the reading of communicative intentions, and analogy making, with which children build grammatical categories and rules from the language they hear around them.”
The generative linguist is more than happy to grant the child these general-purpose tools. Surely the child must be able to identify regularities in the environment, to read communicative intentions and to draw analogies. The central idea of Universal Grammar is that these things are not sufficient to explain the character of the languages we come to acquire. And the goal of the generative linguist is to add as little as possible to UG so that these abilities have increased potency. UG provides the dimensions along which analogies can be drawn and constrains the character of the representations that learners construct when faced with language data. It makes no claims about the trajectory of development, about the contributions of social cognition, memory architectures, cognitive control or statistical inference mechanisms. Rather, it says that, given these endowments, we also need one more thing. Thus, any evidence that nonlinguistic cognitive systems are involved in language acquisition is entirely silent about the existence of UG, unless it can be shown that those systems do work that we otherwise thought UG responsible for. Note also that the generativist would generally be delighted to learn that something they thought fell into their purview is better explained by something extralinguistic, for it allows UG to be smaller, which everyone agrees is the strongest scientific position.
So, what kinds of things is UG responsible for? In an earlier post here, I worked through one example concerning the treatment of subjects and the interpretive asymmetries between sentences like these:
(1) a. Norbert knows how proud of himself Alexander was after the conference.
b. Norbert knows which picture of himself Alexander posted after the conference.
(1b) is ambiguous in a way that (1a) is not, a fact not exhibited in speech to children and not obviously explained by factors external to grammar.
Here is another relevant case, dating back to several papers from the 1970s by Chomsky and Bresnan:
(2) a. Valentine is a good volleyball player and Alexander is too
b. Valentine is a better volleyball player than Alexander is
In both of these examples, there is no pronounced predicate in the second clause, but we fill in this predicate in our minds as equivalent to the predicate in the first clause (i.e., a good volleyball player). Is this unpronounced predicate represented in the same way in the two sentences? Evidence suggests not. For example, in some contexts, they behave differently.
(3) a. Valentine is a good volleyball player and I think Alexander is too
b. Valentine is a better volleyball player than I think Alexander is
c. Valentine is a good volleyball player and I heard a rumor that Alexander is too
d. * Valentine is a better volleyball player than I heard a rumor that Alexander is
The fact to be explained here is why the child learner, when building representations for (2), doesn’t treat the silent predicate in the same way in the two cases. Both can be interpreted as identical to the main clause predicate in (3a/b); however, this dependency can hold across the expression “hear a rumor that…” in the coordinate (3c) but not in the comparative (3d). It is an analogy that could be drawn but apparently isn’t. Moreover, it seems that (2b) has a structure analogous to the structure of interrogatives.
(4) a. What do Valentine and Alexander like to play together?
b. What do you think that Valentine and Alexander like to play together?
c. * What did you hear a rumor that Valentine and Alexander like to play together?
In (4) there is a dependency between the wh-phrase “what” and the verb “play”. In (4b), we see that this dependency can be established across multiple clauses (just like the coordinate and comparative ellipses), and in (4c) we see that it cannot be established across “hear a rumor that” (like the comparative ellipsis and unlike the coordinate ellipsis).
Evidently the analogy that the child learner draws when acquiring English is that comparatives have the same kind of structure as wh-questions. Why do they draw this analogy and not the analogy between the comparative and the coordinate ellipsis, which shares more obvious surface features? These patterns, both the analogies that our grammars make and the ones that are tempting but not taken, have been at the center of the generative enterprise since the 1960s. They hold this privileged place because they invite grammar-internal explanations in the form of computational/representational mechanisms out of which sentences are built. To my knowledge, nobody in the field of usage-based linguistics has even attempted to show how such facts follow from “categorization, the reading of communicative intentions and analogy making.” Their silence suggests one of two things: (a) that their Swiss Army knife doesn’t have the right tool, or (b) that they have dismissed such cases as irrelevant because they haven’t seen how to integrate them with things they do understand. I actually think the answer is a combination of these two, a point I will elaborate on in a second post.
In the meantime, when the usage-based theorists have something to say about the range of grammatical phenomena, and the deep similarities found among widely diverse languages that animate discussion in generative syntax, we will be ready to engage. Until then, my friends and I will continue our long slow march to scientific obsolescence.