Linguists tend to publish for other linguists. And this is fine. However, it was not always so. There was a time when linguists saw themselves as part of a larger cog-psy community and published in venues frequented by non-linguists. Cognition was a terrific venue for such work, and it enabled linguistic discoveries to influence debates about the nature of mind (and, occasionally, even the brain). However, even in these golden years very few linguists published in the leading general science journals, and this had the effect of segregating our work from the scientific mainstream. Books like Pinker’s The Language Instinct were effective conduits to the larger scientific community, but really nothing gains scientific street cred like publishing in the big three peer-reviewed high-impact journals: Science, Nature and PNAS. Moreover, as readers of FoL know, I believe that the single best way to politically advance linguistics and protect it economically is to disseminate our results to our fellow scientists. So, with this as prelude, I am delighted to note a paper of just that ilk that appeared yesterday in PNAS. The paper (here) has three authors: Chung-hye Han, Julien Musolino and Jeff Lidz (HML). Aside from being very GFL (i.e. “good for linguistics”) that such things are being published in PNAS, it’s also a very good paper that I heartily recommend you take a look at. It even has the virtue of being a mere 6 pages (a page limit we should encourage our own journals to try to approximate). So what’s in it? Here is a quick précis with some comments.
The paper argues that FL requires that speakers adopt (at least in the unmarked case) a single G when exposed to PLD. The reasoning for this conclusion is based on a novel Poverty of Stimulus (PoS) argument. What makes it novel is that the paper outlines how a particular kind of variation can be driven by properties of FL. Let me explain.
The standard PoS argument looks at invariances across Gs and shows how these can be accounted for with some or another proposed innate feature of FL. Variation among Gs is then attributed to the differential inductive effects of the Primary Linguistic Data (PLD). What HML shows is that this same logic allows FL to accommodate some G variation just in case the PLD is insufficient to fix a G parameter in the LAD (aka language acquisition device (i.e. kid)). In such cases, if FL requires an LAD to construct a single G for a given PLD, then we expect to find variable Gs in a population of speakers with the following three key features: (i) Speakers exhibit variability wrt a certain set of (relatively recondite) grammatical phenomena, (ii) This variability is attested between but not within speakers, and (iii) The variability across speakers is independent of that found in their parents. Let me say a word about each point.
As regards (i), this is the fact HML discovers (actually this possibility is first described in an earlier 2007 HML paper). HML shows that in Korean the height of the verb explains scope of negation effects. These effects are quite obscure, and verb height cannot be induced from the inspection of surface forms as it can be in languages like French and English. In effect HML argues (IMO, shows) that “children’s acquisition of this knowledge [viz. the scope facts, NH]…is not determined by any aspect of experience…because the experience of the language learner does not contain the necessary environmental trigger” (1).
As regards (ii), HML shows that the variation is consistent within speakers across different negative constructions and over time. In other words, once an LAD’s G fixes the position of a verb (and with it negation) it fixes it there consistently.
As regards (iii), the G variation in the population is effectively random. It is not possible to predict any speaker’s positioning of the verb by examining the Gs of parents (or, for that matter, anyone else). In other words, since there is no data that could fix where the verb sits in a Korean G, the fact that it gets fixed at all is a product of the structure of FL, and so we expect random variation.
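The three predicted signatures can be made concrete with a toy simulation (a minimal sketch; the population size, the `V-high`/`V-low` labels, and the construction names are my own illustrative assumptions, not from the paper):

```python
import random

def acquire_grammar(parent_grammar, rng):
    """Toy LAD: with no disambiguating PLD, the verb-height parameter
    is fixed arbitrarily, independently of the parent's grammar."""
    return rng.choice(["V-high", "V-low"])

def judge(grammar, construction):
    """Once fixed, the G answers uniformly across negative constructions
    and across time, i.e. no within-speaker variability."""
    return grammar  # same setting for every construction

rng = random.Random(42)
parents = [rng.choice(["V-high", "V-low"]) for _ in range(1000)]
children = [acquire_grammar(p, rng) for p in parents]

# (i) between-speaker variability: both settings attested in the population
print(children.count("V-high"), children.count("V-low"))

# (ii) no within-speaker variability: one speaker, many constructions
child = children[0]
print(all(judge(child, c) == judge(child, "neg-1")
          for c in ["neg-1", "neg-2", "neg-3"]))

# (iii) child's setting independent of parent's: agreement hovers at chance
match = sum(c == p for c, p in zip(children, parents)) / len(parents)
print(round(match, 2))  # near 0.5, i.e. chance agreement
```

The point of the sketch is just that (i)–(iii) fall out jointly once the parameter is fixed arbitrarily per learner but held constant thereafter.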
This is a very clever argument. Note that it directly supports the logic of general PoS arguments without assuming G invariance of the output. Or, to put this another way: PoS arguments generally proceed from invariant properties of Gs to features of FL. HML notes that it is consistent with PoS logic that there be variation so long as it is random. The fact that one can find such cases further strengthens PoS logic.
The paper has two other virtues, IMO, more directly relevant to syntactic theory.
First, it provides a model of the kinds of things that syntacticians keep asking for. Syntacticians keep asking whether psycho-ling results can help choose between various alternative syntactic proposals. In principle the answer is, of course, yes. However, paradigms of this are hard to find. HML provides an example where a psycho-ling result could be close to dispositive. The relevant syntactic alternatives hail from the earliest days of Minimalism when V raising was a hot topic of inquiry.
Cast your mind back to the earliest days of the Minimalist Program (MP), indeed all the way back to 1995 and the Black Book. Chapters 2 and 3 presented alternative theories of V raising. The chapter 2 theory (see 138ff) basically provided a theory in which French (where Vs overtly raise to T) is the unmarked case and English (where V does not overtly raise) is the marked one. The markedness contrast is argued to be a reflection of a leading MP idea (viz. economy). The idea is that overt raising involves fewer operations than overt lowering plus LF raising so an English G is less economical than a French one as a matter of FL principle (not UG incidentally, but FL) for it uses more elaborate derivations in getting V to T.
The cost accounting changes in chapter 3 (roughly 195-198) where English “procrastinating” Gs are the unmarked case. The idea is that LF operations without PF effects are more economical than ones that result in PF “deletions” (an idea, btw, that lingers to the present in current MP accounts that take Gs to relate meanings with sound). This requires rethinking how syntactic morphemes are licensed (checking) and how they enter derivations (fully featurally encumbered), but given this and some other assumptions French overt raising in the chapter 3 theory is less grammatically svelte than covert V to T, and hence less preferred.
HML bears directly on these two theories and argues that they are both wrong. Were the chapter 2 story right, we would expect all Korean Gs to assign Korean Vs high positions. Were the chapter 3 theory right, they would all assign low positions. The fact that both are available, and equally so, argues that neither option is better than the other. This leaves open the question of what theory allows both Gs to be equally available (see below). I suspect that a single cycle theory with copy deletion can be made to work, but who knows. Now that HML has shown that both are equally fine, we know that neither of the earlier stories can possibly be correct. Note that this does not mean that the HML account is inconsistent with every possible markedness view of V raising. This brings me to the second virtue of the HML story. It raises an interesting theoretical question.
So far as I know, there is currently no G story for why it is that LADs need choose a single G for V raising. The data are quite clear that this is what happens, but why this is required is theoretically unclear. Thus, HML raises an interesting grammatical question: what is it about FL/UG that forces a choice? I can imagine some answers having to do with lexical complexity: a functional head that optionally assigns features to V in one of two ways is more costly than one that does it only one way. This might have the desired effects if judiciously worked out. However, this then predicts that some Gs, mixed Gs, will be more costly than uniform Gs. This, in effect, makes English the marked case again, given the fact that English Gs raise be and have (and maybe modals) but not more “lexical” verbs. At any rate, none of this is a theory, but the HML data raise an interesting theoretical question as well as closing off two reasonable prior alternatives. So it serves as a nice example of how psycho work can impact syntactic theory.
Let me end with one more point. Unlike much publicity concerning linguistics, the HML work offers an excellent example of what linguistics has achieved. It exploits real linguistic advances to make its scientifically interesting point. And this is in contrast to lousy ways of advertising our linguistic wares. One reaction to the invisibility of linguistics in the general scientific culture has been to try to co-opt anything “languagy” to promote linguistics. The word of the year competition at the LSA is an excellent (sad) example. The idea seems to be that this kind of thing garners media attention and that there is no such thing as bad publicity. I could not disagree more. The word of the year has nothing to do with linguistics, nothing to do with the serious advances GG has made, and relies on no expert/professional knowledge that linguistics brings to the scientific table. As such it does nothing to advertise our scientific bona fides. It’s, IMO, crap. And using it to advance the visibility of linguistics is both counterproductive and dishonest. I don’t know about you, but scientific overreach (aka, scientism) makes my teeth hurt. This is not what a professional linguistics organization (the LSA) should be doing to promote linguistics. What should it be doing? Advertising work like HML, i.e. making this kind of work more widely accessible to the general scientific community. This is what I had hoped the LSA initiative (noted here) was going to do. To date, so far as I can tell, this hope has not been realized. Instead we get words of the year and worse (see here). It’s almost like the LSA is embarrassed by work in real linguistics. Too bad, for as HML indicates, it can sell well.
So, read the HML paper and advertise it to scientific colleagues outside linguistics proper. It is both interesting in itself and good publicity for what we do. It’s real linguistics with a broader reach.
 And this might predict that mixed Gs will necessarily only allow Vs robustly indicated in the data to be “different” (e.g. raise). This is consistent with what we find in English, where be and have are pretty PLD robust. It would be interesting to crank these cases through Charles Yang’s forthcoming learner and see what the limits on exceptionality would be for such a story.
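 If the relevant learner is the one built on Yang’s Tolerance Principle (a rule over N items tolerates at most N/ln N exceptions before productivity fails), the back-of-the-envelope check might look like this. A sketch only: the verb counts below are invented for illustration, not drawn from any corpus.

```python
import math

def tolerance_threshold(n):
    """Yang's Tolerance Principle: a productive rule over n items
    tolerates at most n / ln(n) exceptions."""
    return n / math.log(n)

def rule_is_productive(n_items, n_exceptions):
    """A rule survives as the productive default iff its exceptions
    stay at or below the tolerance threshold."""
    return n_exceptions <= tolerance_threshold(n_items)

# Hypothetical numbers, purely for illustration: suppose a child's verb
# vocabulary has 300 verbs, of which only be/have (and perhaps a handful
# of modals) behave exceptionally wrt overt raising.
print(round(tolerance_threshold(300), 1))  # 52.6
print(rule_is_productive(300, 6))          # True: non-raising survives as default
```

On these made-up figures a handful of raising exceptions sits comfortably under the threshold, which is at least consistent with the speculation above that only PLD-robust verbs like be and have can be “different.”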
 Thx to Alexander Williams for some co-venting about this.