I’m sitting here in my rocking chair, half dozing (it’s hard for me to stay awake these days) and I come across this passage from the Scientific American piece by Ibbotson and Tomasello (henceforth IT):
“And so the linking problem—which should be the central problem in applying universal grammar to language learning—has never been solved or even seriously confronted.”
Now I’m awake. To their credit, IT correctly identifies the central problem for generative approaches to language acquisition. The problem is this: if the innate structures that shape the ways languages can and cannot vary are highly abstract, then it stands to reason that it is hard to identify them in the sentences that serve as the input to language learners. Sentences are merely the products of the abstract recursive function that defines them, so how can one use the products to identify the function? As Steve Pinker noted in 1989, “syntactic representations are odorless, colorless and tasteless.” Abstractness comes with a cost, and so we are obliged to say how the concrete relates to the abstract in a way that is transparent to learners.
And IT correctly notes that Pinker, in his beautifully argued 1984 book Language Learnability and Language Development, proposed one kind of solution to this problem. Pinker’s solution was based on the idea that there are systematic correspondences between syntactic representations and semantic representations. So, if learners could identify the meaning of an expression from the context of its use, then they could use these correspondences to infer the syntactic representations. But, of course, such inferences would only be possible if the syntax-semantics correspondences were antecedently known. So, for example, if a learner knew innately that objects were labeled by Noun Phrases, then hearing an expression (e.g., “the cat”) used to label an object (CAT) would license the inference that that expression was a Noun Phrase. The learner could then try to determine which part of that expression was the determiner and which part the noun. Moreover, having identified the formal properties of NPs, certain other inferences would come for free. For example, it is possible to extract a wh-phrase out of the sentential complement of a verb, but not out of the sentential complement of a noun:
(1) a. Who did you [VP claim [S that Bill saw __]]?
b. * Who did you make [NP the claim [S that Bill saw __]]?
Again, if human children knew this property of extraction rules innately, then there would be no need to “figure out” (i.e., by general rules of categorization, analogy, etc.) that such extractions were impossible. Instead, it would follow simply from identifying the formal properties that mark an expression as an NP, which would be possible given the innate correspondences between semantics and syntax. This is what I would call a very good idea.
Now, IT seems to think that Pinker’s project is widely considered to have failed. I’m not sure that is the case. It certainly took some bruises when Lila Gleitman and colleagues showed that in many cases, even adults can’t tell from a context what other people are likely to be talking about. And without that semantic seed, even a learner armed with Pinker’s innate correspondence rules wouldn’t be able to grow a grammar. But then again, maybe there are a few “epiphany contexts” where learners do know what the sentence is about and can use these to break into the grammar, as Lila Gleitman and John Trueswell have suggested in more recent work. But the correctness of Pinker’s proposals is not my main concern here. Rather, what concerns me is the second part of the quotation above, the part that says the linking problem has not been seriously confronted since Pinker’s alleged failure. That’s just plain false.
Indeed, the problem has been addressed quite widely and with a variety of experimental and computational tools and across diverse languages. For example, Anne Christophe and her colleagues have demonstrated that infants are sensitive to the regular correlations between prosodic structure and syntactic structure and can use those correlations to build an initial parse that supports word recognition and syntactic categorization. Jean-Remy Hochmann, Ansgar Endress and Jacques Mehler demonstrated that infants use relative frequency as a cue to whether a novel word is likely to be a function word or a content word. William Snyder has demonstrated that children can use frequent constructions like verb-particle constructions as a cue to setting an abstract parameter that controls the syntax of a wide range of complex predicate constructions that may be harder to detect in the environment. Charles Yang has demonstrated that the frequency of unambiguous evidence in favor of a particular grammatical analysis predicts the age of acquisition of constructions exhibiting that analysis; and he built a computational model that predicts that effect. Elisa Sneed showed that children can use information structural cues to identify a novel determiner as definite or indefinite and in turn use that information to unlock the grammar of genericity. Misha Becker has argued that the relative frequency of animate and inanimate subjects provides a cue to whether a novel verb taking an infinitival complement is treated as a raising or control predicate, despite their identical surface word orders. In my work with Josh Viau, I showed that the relative frequency of animate and inanimate indirect objects provides a cue to whether a given ditransitive construction treats the goal as asymmetrically c-commanding the theme or vice versa, overcoming highly variable surface cues both within and across languages. 
Janet Fodor and William Sakas have built a large-scale computational simulation of the parameter-setting problem in order to illustrate how parameters could be set, generating important predictions about how they are in fact set. I could go on.
None of this work establishes the innateness of any piece of the correspondences. Rather, it shows that it is possible to use the correlations across domains of grammar to make inferences from observable phenomena in one domain to the abstract representations of another. The Linking Problem is not solved, but there are a large number of very smart people working hard to chip away at it.
The work I am referring to is easily accessible to all members of the field, having been published in the major journals of linguistics and cognitive science. I have sometimes been told, by exponents of the Usage-Based approach and their empiricist cousins, that this literature is too technical, that “you have to know so much to understand it.” But abbreviation and argot are inevitable in any science, and a responsible critic will simply have to tackle it. What we have in IT is an irresponsible cop-out from those too lazy to get out of their armchairs.
IT also thinks that something about the phenomenon of ergativity sank Pinker’s ship, but since Pinker spent considerable time in both his 1984 and 1989 books discussing that phenomenon, I think these concerns may be overstated.
You can sign me up to fail like Pinker in a heartbeat.
A reasonable review of some of this literature, if I do say so myself, can be found in Lidz and Gagliardi (2015) How Nature Meets Nurture: Statistical Learning and Universal Grammar. Annual Review of Linguistics 1. And the new Oxford Handbook of Developmental Linguistics (edited by Lidz, Snyder and Pater) is full of interesting probes into the linking problem and other important concerns.