Mark writes in the comments to this recent post in reply to the following remark of mine:
"Generative Grammar (and Postal IS a generative grammarian, indeed the discoverer of cross over phenomena) has made non trivial discoveries about how grammars are structured. [...] So, when asked about a result, there are tons to choose from, and we should make the other side confront these."
I find the talk about "we" and "other side" sociologically interesting, but ultimately unappealing. As Alex Clark notes, there's a couple different senses in which one might define the sides here; that's a first problem. Also, for me as someone from Europe, it feels a bit, how shall I put it, old-fashioned to think in terms of "sides". I think younger generations of scientists are way more heterodox and more attuned to the necessary plurality of the language sciences than you may realize, or than may be apparent from some of the regular commenters on this blog. This goes both ways: new generations are aware of important advances of the past decades, but also of the affordances of new data, methods, and theories for pushing linguistics forward.
This is why I quite like Charles Yang's and David Adger's recent work (just as I like Morten Christiansen's and Simon Kirby's work): it is interdisciplinary, uses multiple methods, tries to interface with broader issues and tries to contribute to a cumulative science of language. I don't think all the talk of "sides" and worries about people not acknowledging the value of certain discoveries contributes to that (hell, even Evans & Levinson acknowledge that broadly generativist approaches have made important contributions to our understanding of how grammars are structured).
For the same reason, I think the Hauser et al review ultimately misses the mark: it notes that under a very limited conception of language, the evolution of one narrowly defined property may still be mysterious. That's okay (who could be against isolating specific properties for detailed investigation), but the slippage from language-as-discrete-infinity to language-in-all-its-aspects is jarring and misleading. "The mystery of the evolution of discrete infinity" might have been a more accurate title. Although clearly that would draw a smaller readership.
I would like to take issue with Mark’s point here: in the current intellectual atmosphere there are sides and one chooses, whether one thinks one does or not. Moreover, IMO, there is a right side and a wrong one, and even if one refrains from joining one “team” or another, it is useful to know what the disagreement is about and not rush headlong into a kind of mindless ecumenism. As Fodor’s grandmother would tell you, open-mindedness leads to brain drafts, which can lead to severe head colds and even brain death. What’s wrong with being open-minded? If you are tolerant of the wrong things, it becomes harder to see what the issues are and what the right research questions are. So, the issue is not whether to be open-minded but which things to be open-minded about and which things to simply ignore (Bayesians should like this position). Sadly, deciding takes judgment. But, hey, this is why we are paid the big bucks. So, for the record, here is what I take my side to be.
In the white trunks sit the Generative Grammarians (GGs). Chomsky has done a pretty good job of outlining the research program GGs have been interested in (for the record: I think that Chomsky has done a fantastic job of outlining the big questions and explaining why they are the ones to concentrate on. I have had more than a few disagreements about the details, though not about the overall architecture of the resulting accounts). It ranges from understanding the structures of particular natural language (NL) Gs, to understanding the common properties of Gs (UG), to understanding which of these properties are linguistically specific (MP). This program now stretches over about 60 years. In that time GGs have made many, many discoveries about the structure of particular NLs (e.g. Irish has complementizers that signal whether a wh has moved from its domain, English lowers affixes onto verbs while French raises verbs to affixes, Chinese leaves its whs in place while English moves them to the front of the clause, etc.) and about the structure of UG (e.g. there are structural restrictions on movement operations (e.g. Subjacency, the ECP), anaphoric expressions come in various types subject to different structural licensing conditions (e.g. Binding Theory), phrases are headed (X’-theory)) and, I believe, though this claim is less secure than the other two, about the linguistic specificity of UG operations and principles (e.g. labeling operations are linguistically specific, feature checking is not). At any rate, GG has developed a body of doctrine with interesting (and elaborate) theoretical structure and rather extensive empirical support. This body of doctrine constitutes GG’s current empirical/scientific legacy. I would have thought (hoped really, I am too old to think this: how old are you Mark?) that this last point is not controversial. I use this body of doctrine to identify members of “my team.” They are the people who take these remarks as anodyne.
They accept the simple, evident fact that GGs have made multiple discoveries about NLs and their grammatical organization, and they are interested in understanding these more deeply. In other words, if you want to play with me, you need to understand these results and build on them.
I hope it is clear that this is not an exotic demand. I assume that most people in physics, say, are not that interested in talking to people who still believe in perpetual motion machines, or who think the earth is flat, or who consider the hunt for phlogiston a great adventure, or who deny the periodic table, or who reject atomism, or who deny the special theory of relativity, etc. Science advances conservatively, by building on results that earlier work has established and moving forward from these results. Science does this by taking the previous results as (more or less) accurate and looking for ways to deepen them. Deepening means deriving these results as special cases of a more encompassing account. To wit: Einstein took Newton to be more or less right. His theory of gravitation derives Newtonian mechanics as a special case. Indeed, had his theory not done so, we would have known that it was wrong. Why? Because of the overwhelming amount of evidence that Newtonian mechanics more or less accurately described what we saw around us. Now, Generativists have not built as impressive a scientific edifice as Newton’s. But it’s not chopped liver either. So, if you want to address me and mine, then the price of admission is some understanding of what GGs have found (see here for an earlier version of this position). My team thinks that GG has made significant discoveries about the structure of NL and UG. We are relatively open concerning the explanations for these results, but we are not open to the supposition that we have discovered nothing of significance (see here for a critique of a paper that thinks the contrary). Moreover, because my team believes this, when you address US you need to concern yourself with these results. You cannot act as if they do not exist or as if we know nothing about the structures of NLs.
Now, I hope that all of this sounds reasonable, even if written vigorously. The price of entry to my debates is a modest one: know something about what modern Generative Grammar has discovered. Much to my surprise, much of the opposition to GG research seems not to have the faintest idea what it is about. So, though this demand is very modest, it is apparently beyond the capacity of many of those arguing the other side. Morten Christiansen has written drivel about language because he has refused to engage with what we know, and have known for a long time, about it (see here for a quick review of some stuff). Though I like Alex C (he seems a gentleman, though I don’t think I have ever met him face to face), it is pretty clear to me that he really does not think that Generative Grammar has discovered anything useful about NLs, except maybe that they are mildly context sensitive. Islands, Binding Theory, ECP effects: these are barely visible on his radar, and when asked how he would handle these effects in his approaches to learning, we get lots of discussion about the larger learning problem but virtually nothing about these particular cases that linguists have been working on and establishing lo (yes: lo!!) these many years. I take this to mean that he does not prize these results, or thinks them orthogonal to the problem of language learning he is working on. That’s fine. We need not all be working on the same issues. But I do not see why I should worry too much about his results until they address my concerns. Moreover, I see no reason not to ignore this work until it makes contact with what I believe to be the facts of the matter. The same goes for the evolutionary discussions of the emergence of language. The work acts as if Generative Grammar never existed. (As Alex C recently helpfully noted in the comments section (here), the work seems to have a very “simple” idea of language: strings of length 2. Simple? Simple? Very gracious of you Alex).
Indeed, rather than simply concede that evo work has had nothing to say about the kinds of questions Generativists have posed, they berate us for being obsessed with recursion. Imagine the analogue in another domain: someone asks how bee communication evolved and people are told to stop obsessing about the dance: “Who cares how bees dance, we are interested in their communication!”
Generativists have discovered a lot about the structure of NLs (at least as much as von Frisch discovered about bee dancing) and if you want to explain how NLs arose in the species you must address these details or you are not talking about NLs. Not addressing these central properties (e.g. recursion) only makes sense if you believe that Generativists have really discovered nothing of significance. Well, excuse me if I don’t open my mind to consider this option. And, frankly, shame on you if you do.
People can work on whatever they want. But, if you are interested in NL, then the place to begin is with what Generative research over the last 60 years has discovered. It’s not the last word (those don’t exist in the sciences), but it is a very good first, second, and third word. Be as ecumenical as you want about tools and techniques (I am). But never, ever be open-minded about whether what we’ve discovered are central features of NLs. That’s the modus operandi of flat earthers and climate science deniers. Sadly, many of those who work on language are similar in spirit (though there are a lot more of them when it comes to language than there are climate science deniers) in that they refuse to take seriously (or are just ignorant of) what Generativists have discovered.
Last point: Generativists are a legendarily argumentative lot. There is much disagreement among them (us). However, this does not mean that there has been no agreement on what GG has discovered. Ivan Sag and I did not agree on many things, but we could (and did) agree on quite a lot concerning what Generativists had discovered. My side lives under a pretty big tent. We are (or at least I am) very catholic in our methods. We just refuse to disown our results.
Mark, I suggest you close your mind a little bit more. Bipartisanship sounds good (though for an excellent novelistic critique, read here), but it’s really a deadly cast of mind to have, especially when one of the two sides basically refuses to acknowledge how much we have learned over 60 years of research.