Monday, April 8, 2013

Give Nim, Give Me



(Photo credit: Herb Terrace)

Einstein was a very late talker. "The soup is too hot," as the legend has it, were his first words, at the very ripe age of three. Until then, the boy genius hadn't seen anything worth commenting on.

The credibility of such tales aside, they do contain a kernel of truth: a child doesn't have to say something, anything, just because he can. This poses a challenge for the study of child language, since naturalistic production is often the only, and certainly the most accessible, data at hand. A child's linguistic knowledge may not be fully reflected in his speech, something we have known since Lila's deconstruction of the telegraphic stage. Some expressions may not show up because we haven't waited long enough, while others--an extraction violation, for instance--will never be said because they are unsayable.

In recent years, what-you-say-is-what-you-know appears to be gaining popularity, as interest in usage-based theories of language is on the rise. Here is a warmup. The expression "give me" has been proposed as a frozen phrase (Lieven et al. 1992, Tomasello 2010) rather than a syntactically composed one, spawning cottage industries such as "formulaic language", which some regard as a transient stage in language evolution (Wray 1998). True, "give" and "me" make a good tag team: "give me freedom", "give me cheese", "give me now" ... "gimme coffee" (an old favorite of mine), and they dwarf other combinations. Take the speech of Adam, Eve, and Sarah from Roger Brown's classic study: the frequencies of "give me", "give him", and "give her" are:

95 (93 give me, 2 gimme) : 15 (give him) : 12 (give her), or 7.92 : 1.25 : 1

So “give me” does seem especially formulaic ... right? Well, not if you check the frequencies of "me", "him", and "her" from the same three kids:

2870 (me) : 466 (him) : 364 (her), or 7.88 : 1.28 : 1

Nothing much can be concluded from these six numbers, but there seems to be pretty good support for the null hypothesis that "give" and pronouns combine completely independently. The Brown data has been around for forty years; it's just that nobody had bothered to check. (Use the grep, Luke.)
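If you'd like to use the grep yourself, here is a minimal sketch in Python. It assumes the Brown transcripts have been exported to plain text, one child utterance per line, under a hypothetical ./brown/ directory; the actual CHILDES files are in CHAT format and would need the child's tier extracted first.

```python
import glob
import re
from collections import Counter

PRONOUNS = {"me", "him", "her"}
bigrams, pronouns = Counter(), Counter()

for path in glob.glob("./brown/*.txt"):  # hypothetical plain-text export
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            tokens = re.findall(r"[a-z']+", line.lower())
            # overall pronoun counts
            pronouns.update(t for t in tokens if t in PRONOUNS)
            # give + pronoun bigrams, plus the fused "gimme"
            for w1, w2 in zip(tokens, tokens[1:]):
                if w1 == "give" and w2 in PRONOUNS:
                    bigrams["give " + w2] += 1
            bigrams["gimme"] += tokens.count("gimme")

print(bigrams)   # give me / give him / give her / gimme
print(pronouns)  # me / him / her
```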

Nowadays everyone does statistics, but we still need reasonable hypotheses to test for and against. Usage-based theories have plenty of p-values: one can easily show that the frequencies of "give me/him/her" are statistically significantly different from "chance"--but what is "chance"? If we know anything about the statistics of language, it is that language is not "chance" (Zipf 1949). To make the argument against grammar, one would need to show, at a minimum, that the observed distribution in child language is statistically inconsistent with the distribution predicted by a grammar. Judiciously chosen null hypotheses are needed, not gut feelings: so long to the "gimme" myth.
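To make that concrete: under the null hypothesis of independence, the give+pronoun counts should simply be proportional to the overall pronoun counts. Here is a quick goodness-of-fit sketch on the six numbers above (with the caveat that the chi-square approximation is rough when expected counts run this small):

```python
from scipy.stats import chisquare

observed = [95, 15, 12]            # give me / give him / give her
pronoun_totals = [2870, 466, 364]  # me / him / her overall

# Independence: expected give+pronoun counts are proportional
# to how often each pronoun occurs at all.
n_give = sum(observed)
n_pron = sum(pronoun_totals)
expected = [n_give * p / n_pron for p in pronoun_totals]

stat, pval = chisquare(observed, f_exp=expected)
print([round(e, 1) for e in expected])  # [94.6, 15.4, 12.0]
print(stat, pval)  # tiny statistic, p near 1: no evidence against independence
```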

A few years ago, Virginia Valian came to Penn to give a talk on the distribution of determiner-noun combinations in child English. Virginia was the first to show that English children's determiner use is virtually error-free (1986), thereby providing evidence for an abstract grammar. Not so fast, the usage-based folks say: the absence of errors could be the result of children memorizing specific word combinations from adult speech, which would also be error-free. (I fully endorse such skepticism.) We need some other statistical benchmark to show the presence of grammar.

Diversity is a popular measure. Suppose there is a rule DP→DN, where D is either "a" or "the" and N stands for a singular noun, yielding "a/the car", "the/a pizza", etc. Shouldn't the interchangeability of "a" and "the", per grammar, be reflected in the diversity of nouns that appear with both of them? Young children's determiner use, however, shows an overlap of only 20-40%: of the nouns that occur with either determiner, only 20-40% occur with both (Pine & Lieven 1996). Perhaps children just memorize determiner-noun combinations from the adult input (Tomasello 2000, Cognition).
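For concreteness, the overlap measure reduces to a few lines of Python over determiner-noun pairs; the toy sample below is a made-up placeholder, not anyone's data:

```python
# Of the nouns that occur with "a" or "the", what proportion occur with both?
sample = [("a", "car"), ("the", "car"), ("the", "pizza"),
          ("a", "dog"), ("the", "bathroom"), ("a", "bath")]

with_a = {noun for det, noun in sample if det == "a"}
with_the = {noun for det, noun in sample if det == "the"}

overlap = len(with_a & with_the) / len(with_a | with_the)
print(f"{overlap:.0%}")  # 20% here: only "car" occurs with both
```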

Along with Stephanie Solt and John Stewart, Virginia showed that mothers' speech shows diversity measures comparable to their toddlers'--and comparably low (2009, J. Child Language). After her talk, I pulled out some numbers from the Brown Corpus: not Roger Brown, but the collection of English print materials at Brown University, the grandmother of all modern linguistic corpora. Only 25% of the singular nouns that combine with either "a" or "the" combine with both. That's lower than some of the child samples in Pine & Lieven (1996), so two-year-olds have a better command of English grammar than professional writers. Now that is absurd.

One reaction would be to abandon the premise that syntactic diversity is a direct reflection of grammatical complexity. Not a bad idea, and much of the purported evidence for usage-based theory vanishes with it. Another reaction would be to go for the extra credit, by characterizing the statistical profile of syntactic diversity that can be expected from a grammar. If a child used 100 distinct nouns and paired them with either "a" or "the" 500 times, how many of the 100 will be paired with both, assuming the rule DP→DN is at work? Virginia's work was inspirational. I was also knee-deep in Zipfian waters, thanks to the work of Erwin Chan, Constantine Lignos, and my colleague Mitch Marcus. They showed that pretty much everywhere you look--words, lemmas, morphological inflections, syntactic rules--language follows Zipf-like distributions, which can be exploited for fun and benefit.

If a sample contains 100 nouns (types), then a good many of them must occur only once, since they inevitably fall on Zipf's long and flat tail: these fellows will never get to meet both determiners. Even those that show up multiple times may still be monogamous, just as a fair coin tossed 3 times may land on heads 3 times in a row. And grammar is no fair coin: nouns tend to have a favored determiner, even though both combinations are possible. For instance, "the bathroom" is more commonly used than "a bathroom", but we say "a bath" a lot more often than "the bath". These imbalances are probably not a matter of grammar, which presumably does not encode the frequency of bodily needs, but they conspire to produce low syntactic diversity, and thus the impression of grammatical absence.
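That reasoning is easy to simulate. In the sketch below, a fully productive rule DP→DN generates 500 determiner-noun pairs over 100 Zipf-distributed nouns, each noun with its own determiner preference; the vocabulary size, sample size, and bias range are illustrative assumptions, not fitted values:

```python
import random

S, N, TRIALS = 100, 500, 2000
zipf = [1 / r for r in range(1, S + 1)]               # p(rank r) ~ 1/r
bias = [random.uniform(0.5, 0.95) for _ in range(S)]  # each noun's favored determiner

overlaps = []
for _ in range(TRIALS):
    seen = [set() for _ in range(S)]
    for noun in random.choices(range(S), weights=zipf, k=N):
        det = "the" if random.random() < bias[noun] else "a"
        seen[noun].add(det)
    used = [s for s in seen if s]          # nouns that occurred at all
    both = sum(len(s) == 2 for s in used)  # nouns seen with both determiners
    overlaps.append(both / len(used))

# Well under 100%, despite the rule being fully productive.
print(sum(overlaps) / TRIALS)
```

Run it a few times: the overlap hovers well below 100% even though every noun is, by construction, free to take either determiner.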

After a bit of a probability exercise [1], we can use a formula to calculate the expected diversity from the sample and vocabulary sizes (e.g., 500 and 100). The key here is multiplication--the statistical hallmark of independence--of the noun probabilities with the determiner-noun combination probabilities, both of which can be well approximated by Zipf's law. I was surprised to see how well it worked, and in fact had to learn new statistics just to be sure. We mostly use statistics to show that one set of values and another (e.g., experimental results vs. "chance") are statistically different, but being different is not the same as being the same. Lin's concordance correlation coefficient (there is an R package, of course), developed in biostatistics to assess agreement across trials, confirmed the observation. In other words, children's syntactic diversity is exactly what one would expect from a grammar rule, once the general statistical properties of language are taken into account. [2]
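Lin's coefficient itself is simple enough to compute directly; here is a sketch, with made-up placeholder vectors standing in for the observed and predicted diversity values:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: penalizes both scatter
    around the 45-degree line and systematic shifts away from it."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, bias=True)[0, 1]
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Placeholder values only: observed overlap per sample vs. overlap
# predicted by the formula. Values near 1 mean close agreement.
observed = [0.21, 0.28, 0.33, 0.25, 0.40]
predicted = [0.23, 0.27, 0.31, 0.26, 0.38]
print(lin_ccc(observed, predicted))  # close to 1
```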

Someday we may have a bunch of these statistical profilers, like the ones evolutionary geneticists use to detect natural selection at the molecular level. Let me make very clear what this work does and does not show. It does show that at least one part of child language makes use of an abstract rule of grammar, but it does not mean that all parts of child language do. It does show that children can merge, but it does not tell us how they learn what to merge with. It does show the presence of grammatical ability in very young children, but it does not say how that ability got there in the first place, ontogenetically or phylogenetically.

Which brings me to Nim Chimpsky and the evolution of language. The continuity between primate language and early child language is believed to hold "the most promising guide to what happened in language evolution" (Hurford 2011, p. 590), presumably on the basis of the apparent formulaic similarities between them. If the numbers worked out for children, who seem to have a grammar after all, perhaps Nim is due for a similar upgrade? Whatever one thinks of Project Nim--I had to fight back tears--it produced the only publicly available corpus of primate language. Nim acquired about 125 signs of ASL and produced thousands of multi-sign combinations, the vast majority of which were two-sign combinations (Terrace 1979, Nim). These have been described as rule-like constructions, each pairing a closed-class functor such as "give" or "more" with open-class items such as "apple", "Nim", or "eat". Signs do not combine with uniform frequency either, with "eat", "banana", "me", "Nim", etc. among the predictable favorites. What would Nim's syntactic diversity be if he combined signs under a rule? Run the numbers: the poor guy didn't seem to have a grammar, just as his trainers concluded (Terrace et al. 1979).
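The same observed-versus-expected comparison carries over to Nim. Below is a rerun of the simulation sketch at Nim's scale; the sample size and the "observed" value are placeholders rather than Terrace's actual counts, since the point is the shape of the test, not the numbers:

```python
import random

S, N, TRIALS = 125, 2000, 500           # ~125 signs; sample size is a placeholder
zipf = [1 / r for r in range(1, S + 1)]

total = 0.0
for _ in range(TRIALS):
    seen = [set() for _ in range(S)]
    for sign in random.choices(range(S), weights=zipf, k=N):
        seen[sign].add(random.choice(["give", "more"]))  # two functors, for illustration
    used = [s for s in seen if s]
    total += sum(len(s) == 2 for s in used) / len(used)

expected = total / TRIALS
observed = 0.10  # placeholder: plug in the overlap computed from Nim's corpus
print(observed, "vs. expected", round(expected, 2))  # far below = no productive rule
```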

Syntactic diversity in human language, usage-based learning, and Nim Chimpsky.



Moral for the day: the null hypothesis, once properly formulated, may come back to bite your statistical hand. All very exciting. When I explained this work to some of my non-linguist friends (I do have a few!), their reaction was one of surprise, though not the kind I had in mind. "Why would anyone think kids learn language by copying us? Just this morning, Maggie said ___", to be filled in by one of the darndest things kids say. They do wonder about vocabulary, boys vs. girls, and bilingualism, but no one is remotely concerned about the combinatorics of grammar that are, literally, screaming in their faces. Perhaps linguists do worry too much.

[1] Thanks to Ruochuan Liu and Qiuye Zhao for spotting an error early on.
[2] Could a usage-cum-memory-retrieval model account for the same finding? I don't think one knows for sure, since it has been difficult to pin down the mechanics of usage-based learning, so it's unclear what quantitative predictions it makes. I won't dwell on the matter here but refer you to the paper under discussion, where a concrete proposal (Tomasello 2000, Cognitive Linguistics, p. 77) is tested but comes up short.
