I have consistently criticized Everett’s Piraha-based argument against Chomsky’s conception of Universal Grammar (UG) by noting that its conclusions only follow if one understands ‘universal’ in Greenbergian rather than Chomskyan terms (e.g. see here). I have recently discovered that this diagnosis is correct as far as it goes, but it does not go far enough. I have just read this Everett post, which indicates that my diagnosis was too hasty. There is a second part to the argument, and its form is actually that of a dilemma: either you understand Chomsky’s claims about recursion as a design feature of UG in Greenbergian terms OR Chomsky’s position is effectively unfalsifiable (aka: vacuous). That’s the full argument. I (very) critically discuss it in what follows. The conclusion is that not only does the argument fail to understand the logic of a CU conception of universals, it also presupposes a rather shallow Empiricist conception of science, one in which theoretical postulates are only legitimate if directly reflected in surface diagnostics. Thus, Everett’s argument gains traction only if one mistakes Chomsky Universals (CUs) for Greenberg Universals (GUs), misunderstands what kind of evidence is relevant for testing CUs, and/or tacitly assumes that only GUs are theoretically legit in the context of linguistic research. In short, the argument still fails, even more completely than I thought.
Let’s give ourselves a little running room by reviewing some basic material. GUs are very different from CUs. How so?
GUs concern the surface distributional properties of the linguistic objects (LOs) that are the outputs of Gs. They largely focus on the string properties of these LOs. Thus, one looks for GUs by, for example, examining surface distributions cross-linguistically. For instance, one looks to see if languages are consistent in their directionality parameters (if ‘OP’ then ‘OV’), or if there are patterns in the order, within nominals, of demonstratives, modifiers and numerals wrt the heads they modify (e.g. see here for some discussion).
CUs specify properties of the Faculty of Language (FL). FL is the name given to the mental machinery (whatever its fine structure) that outputs a grammar for L (GL) given Primary Linguistic Data from language L (PLDL). FL has two kinds of design features: the linguistically proprietary ones (which we now call UG principles) and the domain-general ones, which are part of FL but not specific to it. GGers investigate the properties of FL by, first, investigating the properties of language particular Gs and, second, via the Poverty of the Stimulus argument (POS). POS aims to fix the properties of FL by seeing what is needed to fill the gap between the information provided about the structure of Gs in the PLD and the actual properties that Gs have. FL has whatever structure is required to get the Language Acquisition Device (LAD) from PLDL to GL for any L. Why any L? Because any kid can acquire any G when confronted with the appropriate PLD.
Now on the face of it, GUs and CUs are very different kinds of things. GUs refer to the surface properties of G outputs. CUs refer to the properties of FL, which outputs Gs. CUs are ontologically more basic than GUs but GUs are less abstract than CUs and hence epistemologically more available.
Despite the difference between GUs and CUs, GGers sometimes use string properties of the outputs of Gs to infer properties of the Gs that generate these LOs. So, for example, in Syntactic Structures Chomsky argues that human Gs are not restricted to simple finite state grammars because of the existence of sentences that allow non-local dependencies of the sort seen in sentences of the form ‘If S1 then S2’. Examples like this are diagnostic of the fact that the recursive Gs native speakers can acquire must be more powerful than simple FSGs and therefore that FL cannot be limited to Gs with just FSG rules. Nonetheless, though GUs might be useful in telling you something about CUs, the two universals are conceptually very different, and only confusion arises when they are run together.
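The non-local if…then dependency can be sketched concretely. Below is a toy recursive-descent recognizer (my illustration, not anything from Syntactic Structures; the names are mine) for the rule S → ‘if’ S ‘then’ S | ‘s’. The rule is recursive, and the matched if…then pairs it licenses can nest to any depth; since a finite-state machine has only finitely many states, it cannot track unboundedly deep nesting of this kind.

```python
def parse_S(tokens, i=0):
    """Recognize S -> 'if' S 'then' S | 's' starting at index i.

    Returns the index just past the parsed S, or None on failure.
    The recursion here mirrors the recursion in the grammar rule.
    """
    if i < len(tokens) and tokens[i] == "s":
        return i + 1
    if i < len(tokens) and tokens[i] == "if":
        j = parse_S(tokens, i + 1)          # the embedded S after 'if'
        if j is not None and j < len(tokens) and tokens[j] == "then":
            return parse_S(tokens, j + 1)   # the S after the matching 'then'
    return None

def accepts(sentence):
    tokens = sentence.split()
    return parse_S(tokens) == len(tokens)

print(accepts("if if s then s then s"))  # True: each 'if' finds its 'then'
print(accepts("if s then"))              # False: unmatched 'if'...'then' pair
```

The point of the sketch is only that matching each ‘if’ with its ‘then’ at arbitrary depth requires unbounded memory (here, the recursion stack), which is exactly what a finite-state grammar lacks.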
All of this is old hat, and I am sorry for boring you. However, it is worth being clear about this when considering the hot topic of the week, recursion, and what it means in the context of GUs and CUs. Let’s recall Everett’s dilemma. Here is part 1:
1. Chomsky claims that Merge is “a component of the faculty of language,” (i.e. that it is a Universal).
2. But if it is a universal then it should be part of the G of every language.
3. Piraha does not contain Merge.
4. Therefore Chomsky is wrong that Merge is a Universal.
This argument has quite a few weak spots. Let’s review them.
First, as regards premises (1) and (2), the argument requires assuming that if something is part of FL, a CU, then it appears in every G that is a product of FL. For unless we assume this, the absence of Merge in Piraha cannot be inconsistent with the conclusion that Merge is part of FL. But the assumption that Merge is a CU does not imply that it is a GU. It simply implies that FL can construct Gs that embody (recursive) Merge. Recall, CUs describe the capacities of the LAD, not its Gish outputs. FL can have the capacity to construct Merge containing Gs even if it can also construct Gs that aren’t Merge containing Gs. Having the capacity to do something does not entail that the capacity is always (or even ever) used. This is why an argument like Everett’s, which proceeds from (3), the absence of Merge in the G of Piraha, does not tell against Merge being part of FL.
Second, what is the evidence that Merge is not part of Piraha’s G? Everett points to the absence of “recursive structures” in Piraha LOs (2). What are recursive structures? I am not sure, but I can hazard a guess. But before I do so, let me note that recursion is not properly a predicate of structures but of rules. It refers to rules that can take their outputs as inputs. The recursive nature of Merge can be seen from the inductive definition in (5):
5. a. If a is a lexical item then a is a Syntactic Object (SO)
b. If a is an SO and b is an SO then Merge(a,b) is an SO
With (5) we can build bigger and bigger SOs, the recursive “trick” residing in the inductive step (5b). So rules can be recursive. Structures, however, not so much. Why? Well they don’t get bigger and bigger. They are what they are. However, GGers standardly illustrate the fact of recursion in an L by pointing to certain kinds of structures and these kinds have come to be fairly faithful diagnostics of a recursive operation underlying the illustrative structures. Here are two examples both of which have a phrase of type A embedded in another one of type A.
6. S within an S: e.g. John thinks that Bill left in which the sentence Bill left is contained within the larger sentence John thinks that Bill left.
7. A nominal within a nominal: e.g. John saw a picture of a picture where the nominal a picture is contained within the larger nominal a picture of a picture.
Everett follows convention and assumes that structures of this sort are diagnostic of recursive rules. We might call them reliable witnesses (RW) for recursive rules. Thus (6)/(7) are RWs for the claim that the rule for S/nominal “expansion” can apply repeatedly (without bound) to their outputs.
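The inductive definition in (5) is easy to render as code. Here is a minimal sketch (the Python names are mine, purely illustrative): lexical items are strings, Merge builds a two-membered set from two SOs, and the recursion of (5b) shows up in the fact that Merge’s output is itself a legal input to Merge.

```python
def is_SO(x):
    # (5a): a lexical item (modeled here as a string) is a Syntactic Object
    if isinstance(x, str):
        return True
    # (5b): the output of Merge -- a two-membered set of SOs -- is an SO
    return isinstance(x, frozenset) and len(x) <= 2 and all(is_SO(y) for y in x)

def merge(a, b):
    """Merge(a, b): build a new SO from two SOs."""
    assert is_SO(a) and is_SO(b)
    return frozenset({a, b})

# Building bigger and bigger SOs by feeding Merge its own output:
so = merge("Bill", "left")     # {Bill, left}
so = merge("that", so)         # {that, {Bill, left}}
so = merge("thinks", so)
so = merge("John", so)
print(is_SO(so))  # True: the rule applied to its own outputs, without bound
```

Nothing in the definition caps how many times the inductive step can apply; that is the “trick” in (5b).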
Let’s say this is right. It does not imply that the absence of RWs implies the absence of recursive rules. As it is often said: the absence of evidence is not evidence of absence. Merge may be applying even though we can find no RWs diagnostic of this fact in Piraha.
Moreover, Everett’s post notes this. As it says: “…the superficial appearance of lacking recursion does not mean that the language could not be derived from a recursive process like Merge. And this is correct” (2-3). Yes it is. Merge is sufficient to generate the structures of Piraha. So, given this, how can we know that Piraha does not employ a Merge-like operation?
So far as I can tell, the argument that it doesn’t is based on the assumption that unless one has RWs for some property X, one cannot assume that X is a characteristic of G. So absent visible “recursive structures” we cannot assume that Merge obtains in the G that generates these structures. Why? Because Merge is capable of generating unboundedly big (long and deep) structures, and we have no RWs indicating that the rule is being recursively applied. But, and this I really don’t get: the fact that Merge could be used to generate “recursive structures” does not imply that in any given G it must so apply. So how exactly does the absence of RWs for recursive rule application in Piraha (note: I am here tentatively conceding, for the sake of argument, that Everett’s factual claims are right, though they are likely wrong) show that Merge is not part of a Piraha G? Maybe Piraha Gs can generate unboundedly large phrases but then apply some filters to the outputs to limit what surfaces overtly (this in fact appears to be Everett’s analysis). In this sort of scenario, the Merge rule is recursive and can generate unboundedly large SOs, but the interfaces (to use minimalist jargon) filter these out, preventing the generated structures from converging. On this scenario, Piraha Gs are like English or Brazilian Portuguese or … Gs, but for the filters.
Now, I am not saying that this is correct. I really don’t know and I leave the relevant discussions to those who do. But it seems reasonable, and if this is indeed the right G analysis for Piraha, then it too contains a recursive rule (aka, Merge), though because of the filters it does not generate RWs (i.e. the whole Piraha G does not output “recursive structures”).
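The generate-then-filter scenario can itself be sketched. In this toy version (entirely my illustration, not Everett’s actual analysis; the depth-based filter and the `max_depth` bound are stand-ins), the structure-building rule is fully recursive, but a hypothetical interface filter lets only sufficiently shallow outputs surface.

```python
def merge(a, b):
    """Recursive structure building: a simple pair stands in for an SO."""
    return (a, b)

def depth(so):
    """Embedding depth of an SO: 0 for a lexical item (a string)."""
    return 0 if isinstance(so, str) else 1 + max(depth(x) for x in so)

def converges(so, max_depth=2):
    """Hypothetical interface filter: only shallow outputs surface overtly."""
    return depth(so) <= max_depth

shallow = merge("John", merge("saw", "Mary"))
deep = merge("John", merge("thinks", merge("that", merge("Bill", "left"))))
print(converges(shallow))  # True: generated and surfaces
print(converges(deep))     # False: generated, but filtered out at the interface
```

On this sketch the grammar’s generative engine is identical whether or not “recursive structures” ever surface; only the filter differs, which is exactly why the absence of RWs does not diagnose the absence of the recursive rule.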
Everett’s post rejects this kind of retort. Why? Because such “universals cannot be seen, except by the appropriate theoretician” (4). In other words, they are not surface visible, in contrast to GUs, which are. So, the claim is that unless you have an RW for a rule/operation/process you cannot postulate that that rule/operation/process exists within a G. So, absence of positive evidence for (recursive) Merge within Piraha is evidence against (recursive) Merge being part of Piraha G. That’s the argument. The question is why anyone should accept this principle.
More exactly, why in the case of studying Gs should we assume that absence of evidence is evidence of absence, rather than, for example, evidence that more than Merge is involved in yielding the surface patterns attested? This is what we do in any other domain of inquiry. The fact that planes fly does not mean that we throw out gravity. The fact that balls stop rolling does not mean that we dump inertia. The fact that there are complex living systems does not mean that entropy doesn’t exist. So why should the absence of RWs for recursive Merge in Piraha imply that Piraha does not contain Merge as an operation?
In fact, there is a good argument that it does. It is that many many other Gs have RWs for recursive Merge (a point that Everett accepts). So, why not assume that Piraha Gs do too? This is surely the simplest conclusion (viz. that Piraha Gs are just like other Gs fundamentally) if it is possible to make this assumption and still “capture” the data that Everett notes. The argument must be that this kind of reply is somehow illicit. What could license the conclusion that it is?
I can only think of one: that universals just are summaries of surface patterns. If so, then the absence of surface patterns that are RWs for recursion in a given G means that there are no recursive rules in that G, for there is nothing to “summarize.” All G generalizations must be surface “true.” The assumption is that it is scientifically illicit to postulate some operation/principle/process whose surface effects are “hidden” by other processes. The problem, then, with a theory according to which Merge applies in Piraha but its effects are blocked (so that there are no RW-like “recursive structures” to reliably diagnose its presence) is that such a theory is, on this view, unscientific!
Note that this methodological principle applied in the real sciences would be considered laughable. Most of 19th century astronomy was dedicated to showing that gravitation regulates planetary motion despite the fact that planets do not appear to move in accord with the inverse square law. The assumption was made that some other mass was present and it was responsible for the deviant appearances. That’s how we discovered Neptune (see here for a great discussion). So unless one is a methodological dualist, there is little reason to accept Everett’s presupposed methodological principle.
It is worth noting that an Empiricist is likely to endorse this kind of methodological principle and be attracted to a Greenberg conception of universals. If universals are just summaries of surface patterns then absent the pattern there is no universal at play.
Importantly, adopting this principle runs against all of modern GG, not just the most recent minimalist bit. Scratch any linguist and s/he will note that you can learn a lot about language A by studying language B. In particular, modern comparative linguistics within GG assumes this as a basic operating principle. It is based on the idea that Gs are largely similar and that what is hard to see within the G of language A might be pretty easy to observe in that of language B. This, for example, is why we often conclude that Irish complementizer agreement tells us something about how WH movement operates in English, despite there being very little (no?) overt evidence for C-to-C movement in English. Everett’s arguments presuppose that all of this reasoning is fallacious. His is not merely an argument against Merge, but a broadside against virtually all of the cross-linguistic work within GG over the last 30+ years.
Thankfully, the argument is really bad. It either rests on a confusion between GUs and CUs or rests on bogus (dualist) methodological principles. Before ending, however, one more point.
Everett likes to say that a CU conception of universals is unfalsifiable. In particular, that a CU view of universals robs these universals of any “predictive power” (5). But this too is false. Let’s go back to Piraha.
Say you take recursion to be a property of FL. What would you conclude if you ran into speakers of a language without RWs for that universal? You would conclude that they could learn a language where that universal has clear overt RWs. So, assume (again only for the sake of discussion) that you find a Piraha speaker sans recursive G. Assuming that recursion is part of FL, you predict that such speakers could acquire Gs that are clearly recursive. In other words, you would predict that Piraha kids would acquire, to take an example at random, Brazilian Portuguese just like non-Piraha kids do. And, as we know, they do! So, taking recursion to be a property of FL makes a prediction about the kinds of Gs LADs can/do acquire. And these predictions seem to be correct. So, postulating CUs does have empirical consequences and does make predictions; it’s just that it does not make predictions about whether CUs will be surface visible in every L (i.e. provide RWs in every L), and there is no good reason that they should.
Everett complains in this post that people reject his arguments because they confuse GUs and CUs and that this is incorrect (i.e. they don’t make this confusion). However, it is clear that there is lots of other confusion and lots of methodological dualism and lots of failure to recognize the kinds of “predictions” a CU based understanding of universals does make. Both prongs of the dilemma that the argument against CUs rests on collapse on pretty cursory inspection. There is no there there.
Let me end with one more observation, one I have made before. That recursion is part of FL is not reasonably debatable. What kind of recursion there is, and how it operates is very debatable. The fact is not debatable because it is easy to see its effects all around you. It’s what Chomsky called linguistic productivity (LP). LP, as Chomsky has noted repeatedly, requires that linguistic competence involve knowledge of a G with recursive rules. Moreover, that any child can acquire any language implies that every child comes equipped to the language acquisition task with the capacity to acquire a recursive G. This means that the capacity to acquire a recursive G (i.e. to have an operation like Merge) must be part of every human FL. This is a near truism and, as Chomsky (and many others, including moi) have endlessly repeated, it is not really contestable. But there is a lot that is contestable. What kind of rules/operations do Gs contain (e.g. FSGs, PSGs, TGs, MPs?)? Are these rules/operations linguistically proprietary (i.e. part of UG or not?)? How do Gs interact with other cognitive systems, etc.? These are all very hard and interesting empirical questions which are and should be vigorously debated (and believe me, they are). The real problem with Everett’s criticism is that it has wasted a lot of time by confusing the trivial issues with the substantive ones. That’s the real problem with the Piraha “debate.” It’s been a complete waste of time.
 By this I mean that whereas you are contingently a speaker of English and English is contingently SVO, it is biologically necessary that you are equipped with an FL. So, Norbert is only accidentally a speaker of English (and so has a GEnglish), and it is only contingently the case that English is SVO (it could have been SOV, as it once was). But it is biologically necessary that he have an FL. In this sense FL is more basic.
 Actually, they are diagnostic on the assumption that they depict one instance of an unbounded number of sentences of the same type. If one only allows finite substitutions in the S positions then a more modest FSG can do the required work.
 Quote from the 2002 Science paper with Fitch and Hauser.
 I henceforth drop the bracketed modifier.
 Thanks to Alec Marantz for bringing this to my attention.
 This point has already been made by Nevins, Pesetsky and Rodrigues in their excellent paper.