It is somewhat surprising that Harper’s felt the need to run a hit piece by Tom Wolfe on Chomsky in its August issue (here). True, such stuff sells well. But given that there are more than enough engaging antics to focus on in Cleveland and Philadelphia one might have thought that they would save the Chomsky bashing for a slow news period. It is a testimony to Chomsky’s stature that there is a publisher of a mainstream magazine who concludes that even two national conventions featuring two of the most unpopular people ever to run for the presidency won’t attract more eyeballs than yet another takedown of Noam Chomsky and Generative Grammar (GG).
Not surprisingly, content-wise there is nothing new here. It is a version of the old litany. Its only distinction is the over-the-top nuttiness of the writing (which, to be honest, has a certain charm in its deep dishonesty and nastiness) and its complete disregard for intellectual integrity. And, a whiff of something truly disgusting that I will get to at the very end. I have gone over the “serious” issues that the piece broaches before, in discussions of analogous hit jobs in the New Yorker, the Chronicle of Higher Education, and Aeon (see here and here for example). Indeed, this blog was started as a response to exactly what this piece exemplifies: the failure of people who criticize Chomsky and GG to understand even the basics of the views they are purportedly criticizing.
Here’s the nub of my earlier observations: critics like Everett (among others, though he is the new paladin for the discontented and features prominently in this Wolfe piece too) are not engaged in a real debate for the simple reason that they are not addressing positions that anyone holds or has ever held. This point has been made repeatedly (including by me), but clearly to no avail. The present piece by Wolfe continues in this grand tradition. Here's what I've concluded: pointing out that neither Chomsky nor GG has ever held the positions being “refuted” is considered impolite. The view seems to be that Chomsky has been rude, sneaky even, for articulating views against which the deadly criticisms are logically refractory. Indeed, the critics’ refusal to address Chomsky’s actual views suggests that they think that discussing his stated positions would only encourage him in his naughty ways. If Chomsky does not hold the positions being criticized, then he is clearly to blame, for these are the positions that his critics want him to hold so that they can pummel him for holding them. Thus, it is plain sneaky of him not to hold them, and in failing to hold them Chomsky clearly shows what a shifty, sneaky, albeit clever, SOB he really is, because any moderately polite person would hold the views that Chomsky’s critics can demonstrate to be false! Given this, it is clearly best to ignore what Chomsky actually says, for this would simply encourage him in articulating the views he in fact holds, and nobody would want that. For concreteness, let’s once again review what the Chomsky/GG position actually is regarding recursion and Universal Grammar (UG).
The Wolfe piece in Harper’s is based on Everett’s critique of Chomsky’s view that recursion is a central feature of natural language. As you are all aware, Everett believes that he has discovered a language (Piraha) whose G does not recurse (in particular, one that forbids clauses to be embedded within clauses). Everett takes the putative absence of recursion from Piraha Gs to rebut Chomsky’s view that recursion is a central feature of human natural language. He further takes this purported absence as evidence against the GG conception of UG and the idea that humans come with an inborn linguistic faculty for acquiring Gs. For Everett, the human linguistic facility is due to culture, not biology (though why he thinks that these are opposed to one another is quite unclear). All of these Everett tropes are repeated in the Wolfe piece, and if repetition were capable of improving the logical relevance of non-sequiturs, then the Wolfe piece would have been a valuable addition to the discussion.
How does the Everett/Wolfe “critique” miss the mark? Well, the Chomsky/GG view of recursion as a feature of UG does not imply that every human G is recursive. Thinking that it does is to confuse Chomsky Universals (CU) with Greenberg Universals (GU). I have discussed this before in many, many posts (type ‘Chomsky Universals’ or ‘Greenberg Universals’ into the search box and read the hits). The main point is that for Chomsky/GG a universal is a design feature of the Faculty of Language (FL), while for Greenberg it is a feature of particular Gs. To claim that recursion is a CU is to say that humans endowed with an FL construct recursive Gs when presented with the appropriate PLD. It makes no claim as to whether the particular Gs of particular native speakers will allow sentences to licitly embed within sentences. If this is so, then Everett’s putative claim that Piraha Gs do not allow sentential recursion has no immediate bearing on the Chomsky/GG claim that recursion is a design feature of FL. That FL must be able to construct Gs with recursive rules does not imply that every G embodies recursive rules. Assuming otherwise is to reason fallaciously, not that such logical niceties have deterred Everett and friends.
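For readers who find the CU/GU distinction slippery, here is a toy sketch in code. Everything in it (the grammar encoding, the category names, the `is_recursive` check) is invented for illustration; it cartoons the logic of the distinction, not any actual linguistic formalism. A Chomsky Universal is a property of the grammar-building capacity; the corresponding Greenberg-style claim would have to hold of every grammar that capacity builds.

```python
# Toy model: a grammar maps each category to its possible expansions.
# (All names here are invented for this sketch.)

def is_recursive(grammar):
    """True if some category can, directly or indirectly, embed itself."""
    def reaches(src, target, seen):
        for expansion in grammar.get(src, []):
            for sym in expansion:
                if sym == target:
                    return True
                if sym in grammar and sym not in seen:
                    if reaches(sym, target, seen | {sym}):
                        return True
        return False
    return any(reaches(cat, cat, {cat}) for cat in grammar)

# Two grammars, both constructible by one and the same capacity:
g_embedding = {                    # "VP -> V S" re-introduces S: recursive
    "S": [["NP", "VP"]],
    "VP": [["V"], ["V", "S"]],
}
g_flat = {                         # no category ever embeds itself
    "S": [["NP", "VP"]],
    "VP": [["V"], ["V", "NP"]],
}
capacity = [g_embedding, g_flat]   # the "FL" of this toy model

# Chomsky-style universal: the CAPACITY can build recursive grammars.
assert any(is_recursive(g) for g in capacity)
# Greenberg-style universal (false here): EVERY grammar built is recursive.
assert not all(is_recursive(g) for g in capacity)
```

A grammar like `g_flat` falsifies the Greenberg-style claim without touching the Chomsky-style one, which is exactly the sense in which a non-recursive Piraha G, even if real, would leave the claim about FL untouched.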
Btw: I use ‘putative claim’ and ‘purported absence’ to highlight an important fact. Everett’s empirical claims are strongly contested. Nevins, Pesetsky and Rodrigues (NPR) have provided a very detailed rebuttal of Everett’s claim that Piraha Gs are recursionless. If I were a betting man, my money would be on NPR. But for the larger issue it doesn’t matter whether Everett is right and NPR are wrong. Even were Everett right about the facts (which I would bet he isn’t), it would be irrelevant to his conclusion regarding the implications of Piraha for the Chomsky/GG claims concerning UG and recursion.
So what would be relevant evidence against the Chomsky/GG claim about the universality of recursion? Recall that the UG claim concerns the structure of FL, a cognitive faculty that humans come biologically endowed with. So, if the absence of recursion in Piraha Gs resulted from the absence of a recursive capacity in Piraha speakers’ FLs then this would argue that recursion was not a UG property of human FLs. In other words, if Piraha speakers could not acquire recursive Gs then we would have direct evidence that human FLs are not built to acquire recursive Gs. However, we know that this conditional is FALSE. Piraha kids have no trouble acquiring Brazilian Portuguese (BP), a language that everyone agrees is the product of a recursive G (e.g. BP Gs allow sentences to be repeatedly embedded within sentences). Thus, Piraha speakers’ FLs are no less recursively capable than BP speakers’ FLs or English speakers’ FLs or Swahili speakers’ FLs or... We can thus conclude that Piraha FLs are just human FLs and have as a universal feature the capacity to acquire recursive Gs.
All of this is old hat and has been repeated endlessly over the last several years in rebuttal to Everett’s ever more inflated claims. Note that if this is right, then there is no (as in none, nada, zippo, bubkis, gornisht) interesting “debate” between Everett and Chomsky concerning recursion. And this is so for one very simple reason. Equivocation obviates the possibility of debate. And if the above is right (and it is, it really is) then Everett’s entire case rests on confusing CUs and GUs. Moreover, as Wolfe’s piece is nothing more than warmed over Everett plus invective, its actual critical power is zero as it rests on the very same confusion.
But things are really much worse than this. Given how often the CU/GU confusion has been pointed out, the only rational conclusion is that Everett and his friends are deliberately running these two very different notions together. In other words, the confusion is actually a strategy. Why do they adopt it? Two explanations come to mind. First, Everett and friends endorse a novel mode of reasoning. Let’s call it modus non sequitur, which has the abstract form “if P, why not Q.” It is a very powerful method of reasoning, sure to get you where you want to go. Second possibility: Everett and Wolfe are subject to Sinclair’s Law, viz. “it is difficult to get a man to understand something when his salary depends upon his not understanding it.” If we understand ‘salary’ broadly to include the benefits of exposure in the highbrow press, then … All of which brings us to Wolfe’s Harper’s piece.
Happily for the Sinclair inclined, the absence of possible debate does not preclude the possibility of considerable controversy. It simply implies that the controversy will be intellectually barren. And this has consequences for any coverage of the putative debate. Articles reprising the issues will focus on personalities rather than substance, because, as noted, there is no substance (though, thank goodness, there can be heroes engaging in the tireless (remunerative) pursuit of truth). Further, if such coverage appears in a venue aspiring to cater to the intellectual pretensions of its elite readers (e.g. The New Yorker, the Chronicle and, alas, now Harper’s) then the coverage will require obscuring the pun at the heart of the matter. Why? Because identifying the pun (aka equivocation) will expose the discussion as, at best, titillating gossip for the highbrow, at middling, a form of amusing silliness (e.g. perfect subject matter for Emily Litella) and, at worst, a form of celebrity pornography in the service of character assassination. Wolfe’s Harper’s piece is the dictionary definition of the third option.
Why do I judge Wolfe’s article so harshly? Because he quotes Chomsky’s observation that Everett’s claims, even if correct, are logically irrelevant. Here’s the full quote (39-40):
“It”—Everett’s opinion; he does not refer to Everett by name—“amounts to absolutely nothing, which is why linguists pay no attention to it. He claims, probably incorrectly, it doesn’t matter whether the facts are right or not. I mean, even accepting his claims about the language in question—Pirahã—tells us nothing about these topics. The speakers of this language, Pirahã speakers, easily learn Portuguese, which has all the properties of normal languages, and they learn it just as easily as any other child does, which means they have the same language capacity as anyone else does.”
A serious person might have been interested in finding out why Chomsky thought Everett’s claims tell us “nothing about these topics.” Not Wolfe. Why try to understand issues that might detract from a storyline? No, Wolfe quotes Chomsky without asking what he might mean, dropping Chomsky’s identification of the equivocation as soon as it is noted. Why? Because this is a hit piece, and identifying the equivocation at the heart of Everett’s criticism would immediately puncture Wolfe’s central conceit (i.e. the heroic little guy slaying the Chomsky monster).
Wolfe clearly hates Chomsky. My reading of his piece is that he particularly hates Chomsky’s politics, and the article aims to discredit the political ideas by savaging the man. Doing this requires demonstrating that Chomsky, who, as Wolfe notes, is one of the most influential intellectuals of all time, is really a charlatan whose touted intellectual contributions have been discredited. This is an instance of the well-known strategy of polluting the source: if Chomsky’s (revolutionary) linguistics is bunk, then so are his politics. A well-known fallacy this, but no less effective for being so. Dishonest and creepy? Yes. Ineffective? Sadly, no.
So there we have it. Another piece of junk, but this time in the style of the New Journalism. Before ending however, I want to offer you some quotes that highlight just how daft the whole piece is. There was a time that I thought that Wolfe was engaging in Sokal level provocation, but I concluded that he just had no idea what he was talking about and thought that stringing technical words together would add authority to his story. Take a look at this one, my favorite (p. 39):
After all, he [i.e. Chomsky, NH] was very firm in his insistence that it [i.e. UG, NH] was a physical structure. Somewhere in the brain the language organ was actually pumping the UG through the deep structure so that the LAD, the language acquisition device, could make language, speech, audible, visible, the absolutely real product of Homo sapiens’s central nervous system. [Wolfe’s emphasis, NH].
Is this great, or what! FL pumping UG through the deep structure. What the hell could this mean? Move over “colorless green ideas sleep furiously” we have a new standard for syntactically well-formed gibberish. Thank you Mr Wolfe for once again confirming the autonomy of syntax.
Or this encomium to cargo cult science (37):
It [Everett’s book, NH] was dead serious in an academic sense. He loaded it with scholarly linguistic and anthropological reports of his findings in the Amazon. He left academics blinking . . . and nonacademics with eyes wide open, staring.

Yup, “loaded” with anthro and ling stuff that blinds professionals and leaves neophytes agog. Talk about scholarship. Who could ask for more? Not me. Great stuff.
Here’s one more, where Wolfe contrasts Chomsky and Everett (31):
Look at him! Everett was everything Chomsky wasn’t: a rugged outdoorsman, a hard rider with a thatchy reddish beard and a head of thick thatchy reddish hair. He could have passed for a ranch hand or a West Virginia gas driller.

A Methodist son of a cowboy rather than the son of Russian Ashkenazic Jews infatuated with political “ideas long since dried up and irrelevant,” products “perhaps” of a shtetl mentality (29). Chomsky is an indoor linguist “relieved not to go into the not-so-great outdoors,” desk-bound, “looking at learned journals with cramped type” (27), someone who “never left the computer, much less the building” (31). Chomsky is someone “very high, in an armchair, in an air conditioned office, spic and span” (36), one of those intellectuals with “radiation-bluish computer screen pallors and faux-manly open shirts” (31), never deigning to muddy himself with the “muck of life down below” (36). His linguistic “hegemony” (37) is “so supreme” that other linguists are “reduced to filling in gaps and supplying footnotes” (27).
Wowser. It may not have escaped your notice that this colorful contrast has an unsavory smell. I doubt that its dog whistle overtones were inaudible to Wolfe. The scholarly blue-pallored desk bound bookish high and mighty (Ashkenazi) Chomsky versus the outdoorsy (Methodist) man of the people and the soil and the wilderness Everett. The old world shtetl mentality brought down by a (lapsed) evangelical Methodist (32). Trump’s influence seems to extend to Harper’s. Disgusting.
That’s it for me. Harper’s should be ashamed of itself. This is not just junk. It is garbage. The stuff I quoted is just a sampling of the piece’s color. It is deeply ignorant and very nasty, with a nastiness that borders on the obscene. Your friends will read this and ask you about it. Be prepared.
Actually, Greenberg’s own Universals were properties of languages, not Gs. More exactly, they describe surface properties of strings within languages. As recursion is in the first instance a property of systems of rules and only secondarily a property of strings in a language, I am here extending the notion of a Greenberg Universal to apply to properties that all Gs share rather than properties that all languages (i.e. the surface products of Gs) share.
Incidentally, Wolfe does not address these counterarguments. Instead he suggests that NPR are Chomsky’s pawns who blindly attack anyone who exposes Chomsky’s fallacies (see p. 35). However, reading Wolfe’s piece indicates that the real reason he does not deal with NPR’s substantive criticisms is that he cannot. He doesn’t know anything, so he must ignore the substantive issues and engage in ad hominem attacks. Wolfe has not written a piece of popular science or even intellectual history, for the simple reason that he does not appear to have the competence required to do so.
It is worth pointing out that sentence recursion is just one example of recursion. So, Gs that repeatedly embed DPs within DPs or VPs within VPs are just as recursive as those that embed clauses within clauses.
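To make the footnote concrete, here is a minimal toy generator (the rules and vocabulary are invented for illustration) in which the only recursion is a DP embedded inside a DP via possessives; no clause is ever embedded in a clause, yet the rule system is recursive in exactly the relevant sense.

```python
import random

# Toy rules (invented for illustration): the "DP -> DP 's N" rule embeds
# a DP inside a DP, so the system is recursive without clausal embedding.
rules = {
    "DP": [["Name"], ["DP", "'s", "N"]],
    "Name": [["Mary"], ["John"]],
    "N": [["friend"], ["dog"]],
}

def generate(cat, depth=0, max_depth=3):
    """Expand a category; past max_depth, take the first (non-embedding)
    expansion so that generation always terminates."""
    options = rules.get(cat)
    if options is None:
        return [cat]  # a terminal word
    choice = options[0] if depth >= max_depth else random.choice(options)
    words = []
    for sym in choice:
        words.extend(generate(sym, depth + 1, max_depth))
    return words

# Produces possessive DPs such as "John 's friend 's dog".
print(" ".join(generate("DP")))
```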
See Wolfe’s discussion of the “law” of recursion on 30-31. It is worth noting that Wolfe seems to think that “discovering” recursion was a big deal. But if it was, Chomsky was not its discoverer, as his discussion of Cartesian precursors demonstrates. Recursion follows trivially from the fact of linguistic creativity. The implications of the fact that humans can and do acquire recursive Gs are significant. The fact itself is a pretty trivial observation.
It's Nevins, Pesetsky and Rodrigues (NPR).
Isn't this the novelist Tom Wolfe, who once lampooned Leonard Bernstein for hosting a party for the Black Panthers?
I think it is. BTW, thx for the T to R revision. It is done. Sorry, Cilene.
I think Wolfe might have problems with certain kinds of people. At any rate, this piece is remarkable for how close it comes to being truly reprehensible. Close enough to be obvious.
There's a bit of a myth that our paper (Nevins, Pesetsky & Rodrigues 2009) criticized Everett only on the facts of Pirahã, and somehow neglected to mention that even if his factual claims were right they would be "irrelevant to his conclusion regarding the implications of Piraha for the GG claims concerning UG and recursion" — but actually we made that point quite strongly. And what interested us about the factual claims was not merely the observation that they are not well-supported, but the observation that when you look at Everett's own data from the perspective of real modern linguistics (not the cartoon versions), you're struck by how well Pirahã fits the picture already painted by other languages — which in the end was the paper's main point.
A few other comments. Everett would not be put off by Pirahã transplanted to the cities acquiring Portuguese, because he's not claiming that they are genetically special. His claim (to the extent one can pin it down) is that somehow Pirahã culture is incompatible with acquiring or using syntactic recursion. Consequently, as long as they live a hunter-gatherer existence in their villages, they cannot compute things like subordinate clauses and recursive possessors, but if the bright lights of the city shine in their eyes and they are acculturated to Brazilian society, subordinate clauses and recursive possessors become computable for them. I'm not going to say that this makes much sense — indeed, it makes less sense than a claim of a genetic difference would make — but that's what was said.
So what was the claimed relevance of Pirahã to Chomskyan linguistics? The starting point is the bizarre meme (it's there in Wolfe too): "Chomsky thinks all languages must have subordinate clauses". This claim is supposed to have been advanced in the well-known 2002 Science paper by Hauser, Chomsky & Fitch (HCF). That paper defined what its authors called the "narrow component" of the faculty of language (FLN) as "compris[ing] only the core computational mechanisms of recursion as they appear in narrow syntax and the mappings to the interfaces" — contrasted with the broad version FLB that includes the interfaces. Famously, HCF then speculated that only FLN is uniquely human. So how do we get from this evolutionary speculation to "Chomsky thinks all languages must have subordinate clauses"?
The chain of non-reasoning goes something like this:
(1) If FLN comprises a capacity for X, and X is "uniquely human", all languages must use X.
(2) What HCF meant by "the core computational mechanisms of recursion" is subordinate clauses.
(3) The words "and the mappings to the interfaces" (intended to include most of what linguists work on) are meaningless and can be ignored.
Conclusion: all languages must use subordinate clauses.
Excitement: "Hey, Pirahã doesn't use subordinate clauses, so Chomsky and his colleagues are wrong (about everything)!"
Yes, premise (1) makes no sense, and (2) and (3) are false — but good luck getting this across to the big bad Wolfes of the world. I haven't found the trick yet. It's linguistics' version of birtherism. Crazy.
In fairness, subordinate clauses are mentioned as a "for instance" by Hauser, Chomsky & Fitch, and the abstract to that paper omitted the words "mappings to the interfaces" (an editorial mess-up). So there was some excuse for a few minutes of confusion about (2) and (3) in 2002 — maybe. But there's no excuse for confusion about the package as a whole, given how many times the issues have been clarified in multiple venues over the past decade, from our paper to multiple well-written sources on the Internet, for example here (read parts 2 and 3 too!) and here.
Very useful thx. Two points:
1. The aim has never been to convince Everett or Wolfe or many others. They are beyond redemption IMO. The aim is to make clear that the whole "debate" rests on a pun, an equivocation. The fact that some G might not exhibit a "universal" feature is a problem for a Greenberg conception of universals, but not the Chomsky/GG conception. It is natural to think that 'universal grammar' means universal in all grammars. This is indeed how Wolfe understands matters, as his discussion of the law of recursion and the law of gravity makes clear. So, this natural understanding by the non-expert needs to be made clear. My experience is that when it is, rational people understand that Everett's claims disintegrate (witness the thread on the Chronicle version of this same farce).
2. Note that if things were as Everett and Wolfe claim, viz. that Piraha speakers failed to have a sense of number and had "limited" cognitive options available to them in virtue of having "primitive" Gs, then this would, in an odd way, CONFIRM Chomsky's idea that recursion really matters for actual cognition and so support his evolutionary views. What would be mysterious is how Gs could have such an effect, and it would argue for a very strong connection between recursion in an actual language and cognitive facility overall. This would suggest that the language module has widespread cognitive reach. It would also be most impressive evidence for some version of the Sapir-Whorf hypothesis. Of course, I believe none of this. I think that Everett has shown next to nothing. But if one WERE to go down the road he describes, even then none of the things he actually concludes follows. He is not only wrong actually, he is wrong counterfactually, and you gave the reason why: most of what he argues borders on the nonsensical. Birtherism seems just right.
It is natural to think that 'universal grammar' means universal in all grammars. This is indeed how Wolfe understands matters as his discussion of the law of recursion and the law of gravity makes clear. So, this natural understanding by the non-expert needs to be made clear.
It does. Has it occurred to you to blame Chomsky for not explaining what he means? If, by "recursion is a feature of universal grammar" he means "all humans are capable of learning a grammar that contains recursion", why doesn't he just say so?
Everett, as the third of four authors, has more recently explained what he means and what he doesn't mean (pdf). I'll quote all of slide 7:
Background: some terminological confusion
2. The term Universal Grammar (UG):
(a) Chomsky (in more recent discussions of Everett’s work) and Nevins et al.  assume that UG is whatever is biologically necessary to learn human language.
(b) Everett uses the term to refer to a specific claim about the nature of human language from HCF: that it allows recursion (self-embedding) in the syntax
• It makes no sense to falsify UG in the sense of (a): this is just a descriptive term
• Everett intends (b)
The chain of non-reasoning goes something like this:
(1) If FLN comprises a capacity for X, and X is "uniquely human", all languages must use X.
Well, sure. "FLN comprises a capacity for X, and X is 'uniquely human'" really does imply that languages which lacked X would be "not human", and therefore not be found to be spoken by humans.
I do agree that this doesn't logically follow. But I find the implication hard to escape, and apparently it comes naturally to a lot of linguists.
When I started to write scientific papers, the very first thing my thesis supervisor said to me was "you will be misunderstood – by someone, at some point, for some reason –, so it's your job to leave as few chances for misunderstandings to occur as possible". That was just 10 years ago. Chomsky has been publishing for half a century or more, and apparently he still hasn't noticed.
In the same spirit of clarity I'll spell out a few things that are logically irrelevant to the above, but perhaps not logical enough:
1) I'm not going to defend Wolfe. I haven't read his article, or his novels for that matter; all I know about him is that he's conservative even by American standards.
2) There are linguists out there who strongly disagree with Chomsky's linguistic work but like his politics just fine. I'm told there are also people who like his linguistics but not his politics.
3) Recursion as a Greenberg-style universal doesn't depend on Pirahã alone; several Australian languages have recently been claimed to lack it as well.
"Everett uses the term to refer to a specific claim about the nature of human language from HCF: that it allows recursion (self-embedding) in the syntax"
If that is what Everett had in mind, then he got it more or less right (in that particular slide — and let's not quibble about the "self" in "self-embedding") — but then Pirahã truly can't be a problem for HCF's proposal, for exactly the reason Chomsky and others have been giving: "allow" ≠ "require".
But unfortunately, that text looks like a slip on the part of Everett and his coauthors in that talk, since taken at face value, it negates everything he's claimed to be arguing about. Which brings me to the next point relevant to your comment. What we're supposed to be doing in linguistics (as in any serious field) is finding ways to evaluate proposals worth arguing about, and (of course) we're also supposed to push the field forward by coming up with such proposals ourselves. It's not a game of gotcha. If Chomsky gives an interview tomorrow after too little sleep and accidentally says "language is not innate", when he should have left out the negation, that doesn't mean you've caught him contradicting his life's work and victory is yours. Even supposing he or someone else is unclear when they try to say something, they have the right to clarify what they actually meant. The more general point: we're not supposed to be arguing about people at all, but about ideas and proposals. So if HCF were unclear about something, it's not only their job to clear up the confusion (which they did on most points in their later reply to Jackendoff and Pinker in Cognition and Chomsky's on-line addendum) — it's also our job to help them clear it up if we want to build on their work, and to base our own research on the clear version. (By the way, that's what Nevins, Rodrigues and I did — had to do — repeatedly in our article with respect to Everett's claims, as should be evident from the text.) In the case at hand, fourteen years after HCF's article was published, for linguists and journalists to carry on about a misinterpretation of their prose that has been clarified repeatedly is just not right.
Chomsky introduced the term 'Universal Grammar' in the early 1960s in the context of discussions of Cartesian approaches to language ('Aspects' being the classical discussion). He made it very clear there that it was a claim about FL, not about Gs. He has repeatedly made this point clear. So experts (or those who consider themselves so) should be conversant with the technical usage of the term. Before Chomsky there was no term 'Universal Grammar.' Greenberg discussed universals, but not universal grammar. So, given that he introduced the term, that there was no similar term in the literature, and that it was coined to acknowledge earlier forgotten Cartesian work that advanced a similar (though not identical) thought, it seemed ok to use the term in this sense. So, did Chomsky deceive? Nope. Did Everett misunderstand? Well, either he did and he is wrong, or he didn't and he is dishonest. Your pick.
As for coming naturally to a lot of linguists, well, I will let you judge if this is so. Regardless, it is not what Chomsky meant or has proposed. I suspect that Chomsky does not think that there are many Greenberg-style universals, and he is not in any way committed to their existence. Now, if Everett is interested in arguing that Piraha falsifies the (quasi-)Greenbergian claim that all Gs are recursive, then that's fine with me. It is, of course, an empirical matter and I am happy to let the chips fall where they may. However, do you really think that ANYONE would care if Everett showed that Greenberg was wrong? Would his work be discussed in the New Yorker/Chronicle/Harper's? I think it is safe to say that the answer to this is NO. The only reason that anyone attends to Everett is that he has made the claim that his work shows that Chomsky and UG are wrong. But we all agree that he cannot have shown this. Thus, continuing to make these claims after being repeatedly "corrected" regarding the irrelevance of his claims is suspect. I actually think I know why he is slow to correct this error. He and his work would fall into well-deserved obscurity. Hence, why correct the misimpression? There's no future in that, just honor.
Also, on the alleged lack of subordinate clauses in "several" Australian languages, see the claims by Levinson (2013) and response by Legate, Pesetsky and Yang (2014), in Language.
Points taken. Next point: If "recursion is a feature of UG" merely means "all humans can learn a grammar that contains recursion", why bother making the statement? Wouldn't that make every feature found in any human language a feature of UG?
So, did Chomsky deceive? Nope.
What, intentionally?!? Of course not, that's not my claim or (I hope) anyone else's.
However, do you really think that ANYONE would care if Everett showed that Greenberg was wrong? Would his work be discussed in the New Yorker/Chronicle/Harper's?
"Amazonian language fundamentally unlike any other" does make a fetching headline. If a language lacks this pretty basic feature that apparently all others have, there's something to explain here, no matter what that means for any theoretical frameworks few people have heard of.
Wouldn't that make every feature found in any human language a feature of UG?
If we’re talking about specifically grammatical features, then this is approximately true under one reading and false under another. We assume as an idealization that any grammatical property of a given language is compatible with the principles of UG. But it’s certainly not the case that everything we pretheoretically take to be part of the grammar of a given language corresponds to a distinct feature or component of UG. For example, many languages have reflexive pronouns that require local antecedents. This does not necessarily mean that UG itself has any principles or operations that apply to reflexive pronouns as such. It just means that UG and other relevant cognitive systems are such that languages with reflexive pronouns of this sort are acquirable. For this reason, the hypothesis that UG contains special principles regulating the distribution of reflexive pronouns is not in any way trivially true. In this sense, not every grammatical feature of a human language is a feature of UG.
In the particular case of recursion, it is pretty clear that UG must contain some kind of recursive structure-building operation (or some principle that licenses arbitrarily nested structures), since we clearly find embedding of arbitrary depth in some languages. Hauser, Chomsky and Fitch made the much stronger claim that Merge is the only component of the Faculty of Language in the Narrow sense. Everett, for whatever reason, zeroed in not on this stronger claim, which can certainly be challenged, but on one of its uncontroversial entailments. It’s roughly as if Chomsky said “the only principle of FLN is an operation that sequences two sounds to make a longer sound”, and a phonologist then tried to refute this (absurd) claim by finding languages that have no sentences longer than a single phone. In both cases it’s the ‘only’ that makes the claim interesting.
Alex D addressed your first point well. Let me just add that what makes the fact of recursion interesting is that it is not something that is easy to acquire without substantial "innate" machinery. Given Chomsky's abiding interest in Gs as indicators of the structure of FL, the fact that some Gs are recursive and acquirable by any human kid is of significant interest.
Second, we agree that Chomsky did not deceive. However, I would go further and say that he did not really mislead. He introduced the term in a natural way in the context of discussions of earlier Cartesian conceptions of grammar. He wanted to signal that his views were continuous with theirs and so chose the term 'universal GRAMMAR,' with the emphasis on the second word. This then got confused with Greenberg universals, despite Chomsky being careful to distinguish the two. So, if there is confusion, it was not due to Chomsky. I should add that I can see why non-experts might be confused, but professional linguists are paid to understand the difference. True, they are not paid a lot, but enough to be able to distinguish two senses of 'universal.'
Last point: there may be something to explain here, or there might not be. The question is what the source of the surface non-recursion is. I noted, cryptically no doubt, in another comment that surface non-recursion might still involve a recursive sub part in the G coupled with a filter preventing the recursion from being exercised. If this is so, then the question is how the filter is acquired, not whether the G contains recursive rules. It does.
An analogy which I used before might help. Cars can be equipped with governors that limit their speed, say to 70mph. The car's engine and design are IDENTICAL to those of a car with a top speed of 120mph. The only difference is that it is capped so as not to accelerate above 70. The right "theory" of this does not propose an entirely new kind of engine, but the same kind the 120mph car has plus a "filter" to limit the output. One can imagine that Piraha as described by Everett is the same: the same recursive core plus a filter. If this is what is going on, then Piraha Gs are effectively the same as English Gs plus a filter. The only interesting question is the source of the filter, not whether or not the G is effectively recursive.
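The governor analogy can be rendered in a few lines of code. This is a toy sketch of my own; the function names and the numeric depth cap are invented for illustration, not anyone's actual analysis:

```python
# Toy sketch of the governor analogy; gen_dp, grammatical and the depth
# cap are my own illustrative inventions, not anyone's actual analysis.

def gen_dp(nouns, depth):
    """Recursively build a possessive DP with `depth` levels of embedding."""
    if depth == 0:
        return f"the {nouns[0]}"
    return f"{gen_dp(nouns[1:], depth - 1)}'s {nouns[0]}"  # a DP inside a DP

def grammatical(dp_depth, governor=None):
    """Same recursive core; a filter, if present, caps the licensed depth."""
    return governor is None or dp_depth <= governor

print(gen_dp(["tail", "dog", "friend", "woman"], 3))
# -> the woman's friend's dog's tail
print(grammatical(3))              # True: no filter, any depth licensed
print(grammatical(3, governor=1))  # False: identical engine plus a filter
```

The point of the sketch is that nothing about the "engine" differs between the two grammars; only the filter does.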
As for the fetching headline, to my recollection that is NOT the one that anyone ran. Rather it was "Chomsky wrong! There is no Universal Grammar." Maybe they should have gone with yours, but the fact that none of them did indicates to me where the real action lay. In fact, Everett's own emphasis ("I proved Chomsky wrong!") says much the same thing.
"Let me just add that what makes the fact of recursion interesting is that it is not something that is easy to acquire without substantial "innate" machinery. This is what makes the fact interesting."
I think you are right that this idea is part of what makes the fact interesting, but I don't really agree with the claim that acquisition of recursion is particularly hard. Could you expand a bit?
I think it's a question of how you get to induce from a finite set of examples in the input to an unbounded set in the grammar with no evidence in the input for unboundedness (it's a version of Hume, I think). Think of DP genitives in English: what would stop an analysis of these as involving a rule that basically says you have a Determiner, then however many NPs you have evidence for in your input, each followed by 's, then the final NP? So say you heard in your input a maximum of 3 such NPs (the woman's old friend's dog's tail). Nothing licenses you to go beyond this without a specific capacity that legitimates a recursive structure of the sort [_DP DPs [_NP ... N ... ] ]. So something must allow the child to jump to the recursive rather than the limited iterative rule in the absence of evidence that there is a recursive rule. Hence the need to appeal to recursiveness 'built in' as opposed to emergent from, say, analogy/chunking, etc.
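The induction gap here can be made concrete with a toy pair of string patterns. This is my own illustration: the two regexes stand in for the bounded and the recursive hypotheses and are not a serious grammar of English genitives:

```python
import re

# My toy illustration of the induction gap: two hypotheses about English
# genitives, as string patterns. Neither is a serious grammar; "bounded"
# stops at the attested depth of 3, "unbounded" is the recursive rule.
NOUN = r"[a-z]+"
bounded   = re.compile(rf"the ({NOUN}'s ){{0,3}}{NOUN}$")  # as deep as attested
unbounded = re.compile(rf"the ({NOUN}'s )*{NOUN}$")        # the rule acquired

attested = "the woman's friend's dog's tail"          # 3 possessors
novel    = "the woman's friend's dog's owner's tail"  # 4 possessors

# Both hypotheses fit every attested example...
print(bool(bounded.match(attested)), bool(unbounded.match(attested)))  # True True
# ...so nothing in the input decides between them; yet speakers accept:
print(bool(bounded.match(novel)), bool(unbounded.match(novel)))        # False True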
I think that's a good argument, but maybe it's too strong? Because any learning algorithm will go beyond the input without evidence. And that inductive step needs to be licensed by some innate mechanism which is built in. So I don't see what's special about recursion in this context, versus any other learning mechanism that generalizes.
Hi Alex: There is a major difference. Indeed, many learning models generalize from a finite number of examples to an open-ended class: you see a bunch of barking dogs and conclude that all dogs bark. But recursion is not just infinity: it is recursive! David's example of English possessives is excellent, but it is unclear how a learner, even one innately endowed with recursion, can go from 2 or 3 to infinity. The solution must be able to handle cases in, say, German, where the embedding stops at one (Marias Haus, *Marias Nachbars Freundins Haus). A potential solution, which amounts to a strong claim, is that if recursion is observed at level 2, then it must be unbounded--assuming that the learner "knows" recursion. Note that this cannot be a general learning strategy: it's nuts to go from two examples to infinity.
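That "strong claim" can be sketched in code. This is my paraphrase of the idea, not an actual model from the literature, and the threshold of 2 is precisely the assumption at issue:

```python
# My sketch of the conjecture above, not an actual published model:
# a learner that already "knows" recursion treats embedding attested at
# depth 2 as evidence for unbounded embedding, while depth-1 data
# (the German pattern) yields a bounded rule.

def posited_bound(max_attested_depth):
    """Embedding bound the learner adopts; None means unbounded."""
    if max_attested_depth >= 2:
        return None              # recursion observed: generalize to infinity
    return max_attested_depth    # no evidence of recursion: stay bounded

print(posited_bound(1))  # 1    -> German-like: one possessor, no deeper
print(posited_bound(3))  # None -> English-like: unbounded embedding
```

As the comment notes, a leap like this only makes sense for a learner whose hypothesis space already contains recursion; as a general inductive strategy it would be nuts.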
Right, so Hume's induction problem is everywhere, meaning that even the most empiricist among us have to be, as Quine put it, 'up to their necks in innate learning mechanisms'. But the issue here is brought out, I think, by my example. Why do speakers of English uniformly have a recursive structure for possessives like (2), as opposed to one that could be captured by just putting a Kleene star on the intermediate NPs, like (1)? So it's not only about why they generalize, it's why they generalize towards the recursive structure, as opposed to either not generalizing, or generalizing towards iterated flat structures (which we do sometimes, but in very very very very limited cases ;-)).
1) [Det NP's* NP ]
2) [_DP DP's [_NP NP ] ]
We can say it's because the semantics is recursive, but of course that begs the question. I guess an empiricist alternative would be to say that there's a mechanism that learns that the distribution of X includes itself, but I'm not sure how that could work in the example I have, as the distribution is actually different (so a full possessive DP has the distribution of determiners, though not entirely: e.g. the dog/the man's dog/*the man's the dog/the man's every wish/*the every wish). I'm also not sure I'd understand how such a mechanism is any different from just stipulating the existence of recursive structure, or how it would follow from something deeper (like the Chater notion of bottlenecks, say). I'm quite interested in that question, so if you have any pointers to literature, I'd be grateful.
But of course the issue is also much wider than just my possessives example. Time and time again, speakers of human languages seem to have encoded their grammars via a mechanism that leads to recursively structured outputs, when the data (as people in usage-based grammar have shown) seem to be that people actually use these structures quite rarely (which, I'd say, is really a great point in favour of a generative approach to grammar, though the usage-based people seem to think the opposite). Moreover, in new sign languages, these recursive structures emerge extremely fast, so they don't seem to be a side effect of slow movement towards a recursive language across multiple generations a la Kirby. Their ubiquity (even accepting Dan's Piraha case as a counterexample, which is of course tendentious) is a fact that calls for an explanation, and that explanation is not given unless there's something special linking the human linguistic capacity to a mechanism that recursively embeds structure (and here, I'm not even requiring category recursion, just a recursive/inductive system for structure building). If there's a better story, I'm not sure I know it.
So not only do we need substantial innate structure (which was Norbert's point), that structure seems to be particularly prominent in language.
Well I agree with quite a lot of that, especially the fact that we don't yet have very good stories about language acquisition, but I wasn't trying to have another rationalism vs empiricism battle. I was just interested in the implication:
A) learning recursion requires rich innate structure.
and the non-implication
B) iteration does not require rich innate structure.
I was at this interesting workshop in Leiden organised by Rens Bod and Stefan Frank and Morten Christiansen, and many of the participants accepted A), but of course took it to be a modus tollens, and concluded that since rich innate structure is clearly false, there can't be hierarchical structure in language. So I think it is quite important to see if A is true or false.
I take A to be false. Which is not to claim that we have good models of language acquisition but we do have simple models that can learn recursive structures in their simplest forms (CFGs). Even if, to repeat, these are not adequate as models of language acquisition for many reasons.
So language acquisition may require rich innate structure (I try to keep an open mind) but it is not because of recursion per se.
But I think I am missing something: what sort of story of the acquisition of recursion are we talking about that uses rich innate structure? Because we may be just disagreeing about what "substantial" means, which wouldn't be very productive.
@AlexC: This seems to me like a basic Fodorian disagreement. What the others are saying is that if you have a model that can learn CFGs, then that model was ipso facto capable of representing recursive structures, in the first place, before the first bit of training data was ever encountered. (E.g. it includes a data structure capable of storing the results of the learning procedure, and an ability to interpret those results, at least in some instances, as though they encode a CFG.) But where did the model get its capacity to represent recursive structures like CFGs? Certainly not from the data, since the data is finite and thus could not license such an inference. The only remaining option is that the assumption that what you are learning might be a CFG (and thus, recursive) was pre-baked into the model in one form or another.
Note that this holds even if your model does not presuppose that the result of the learning procedure will always be a CFG. Suppose, for example, you have a model that can learn either a finite string language or a CFG, and it decides which representation to use based on a Minimum Description Length-type of metric. You can obviously induce such a model to switch to the CFG representation using a finite amount of training data; but the point remains that the model had an in principle capacity for recursion before the first training data was even encountered. It is that capacity that I think people are talking about here.
I agree with you that whether this hypothetical example I just sketched counts as coming pre-baked with "rich" or "substantive" structure is a matter of terminology. But the deeper point, I think, is that the capacity for representing recursive structures (as well as a bias towards imposing such representations on the data, at least in some cases) was already there.
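The hypothetical MDL-style model can be sketched very loosely in code. Every particular here (the cost functions, the flat grammar constant) is my own illustrative assumption, not a real MDL proposal:

```python
# A very loose rendering of the hypothetical model sketched above. All the
# particulars -- the cost functions, the flat GRAMMAR_COST -- are my own
# illustrative assumptions, not a real MDL proposal.

def dl_finite_list(data):
    """Cost of simply memorizing every string verbatim."""
    return sum(len(s) for s in data)

def dl_recursive_grammar(data):
    """Flat cost for a rule like "DP -> the N | DP 's N" plus a small
    per-sentence derivation cost (word count, say)."""
    GRAMMAR_COST = 20
    return GRAMMAR_COST + sum(len(s.split()) for s in data)

def chosen_representation(data):
    """Pick whichever representation yields the shorter description."""
    if dl_recursive_grammar(data) < dl_finite_list(data):
        return "recursive grammar"
    return "finite list"

tiny = ["the dog"]
rich = ["the dog", "the man's dog", "the man's friend's dog",
        "the old man's friend's dog's tail"]
print(chosen_representation(tiny))  # finite list: the grammar doesn't pay off
print(chosen_representation(rich))  # recursive grammar
```

The switch is driven by finite data, but the capacity to entertain the recursive representation was in the model before the first datum arrived, which is the point at issue.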
I like the way David and Omer put the issue. You don't learn to recurse. Rather you come to the induction problem presupposing you might have to. It's a precondition of acquisition, not a result. Whether this is a rich or weak assumption goes beyond my capacity to judge. I doubt the question is well formed. Luckily, answering it is irrelevant to the interesting questions concerning the kinds of hierarchical recursion we find in language.
@Omer: I definitely find the idea of a (domain-general?) innate bias for hierarchical structures fairly plausible for the reasons that various people like Herb Simon and so on have put forward. So if that is what the claim is, then I don't disagree (well not very much anyway).
But I don't think this differs then from iteration, since exactly the same Fodorian argument would go through there; namely, you need the ability to represent a regular grammar or flat structure etc etc.
Let's drop "rich" as it is an evaluative term. You need innate structure to project from finite data to a generalization procedure. The procedure you need in the case of language is one that generates an unbounded number of structured phrases. These have certain properties that linguists have described for the last 60 years. The generative procedure is recursive in the special way that it produces such structured hierarchies with the properties they have. What is required for this? That's how much innate structure we need. Is this rich? It will depend on your point of view. But who cares. It is what it is. The important point is that it generate an unbounded (in length and depth) number of structured phrases, and this seems to require rules that take their outputs as inputs at the very least, and that preserve the hierarchy of the inputs in the outputs.
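A minimal sketch of an operation with those two properties, taking its own outputs as inputs and preserving the inputs' hierarchy in the output. This is a toy of mine, not the official definition of Merge:

```python
# A minimal toy (mine, not the official definition of Merge): an operation
# that takes its own outputs as inputs, so unboundedly deep structures
# arise, and that preserves the inputs' internal hierarchy in the output.

def merge(a, b):
    """Combine two syntactic objects into a new, nested one."""
    return (a, b)

def depth(x):
    """Depth of embedding in a merged structure."""
    return 0 if isinstance(x, str) else 1 + max(depth(x[0]), depth(x[1]))

vp = merge("see", merge("the", "dog"))  # an output of merge fed back in
tp = merge("will", vp)                  # and again, with no bound in principle
print(tp)           # ('will', ('see', ('the', 'dog')))
print(depth(tp))    # 3
print(tp[1] == vp)  # True: the input's hierarchy survives intact
```

Because the operation's outputs are legitimate inputs, unbounded length and depth come for free, and the nested structure of each input is carried into the output untouched.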
(Side remark: A proposal that all languages must allow subordinate clauses might be quite interesting and worth exploring (so if a language looks to be missing them, it's not really -- there's an intervening factor hiding them from view). It just happens not to have been made by the people who are claimed quite crucially to have made it.)
One side comment as well: the claim that a G's outputs, taken as a whole, do not require recursive rules does not imply that none of the rules in the G are recursive. They may well be, with the G coupled with a filter that blocks the recursive outputs from appearing. Even for finite languages it is often useful (for purposes of compact representation) to assume that the relevant rules comprise a recursive part that can go on ad infinitum and a filter that intersects with it. So, Gs as a whole might not tolerate outputs with unbounded embedding while still containing recursive operations.
Yes, exactly - thanks. If you look at our paper, bottom of p.365 through the next page or two, we make precisely this point!
I'm just starting to read this article and find it disgusting. I thought that you were being a bit hyperbolic at first but it turns out your descriptions are spot on.
I find it so ridiculous that Chomsky is being accused of being an ivory tower academic not willing to dirty his hands in the field given the time he spent in jail cells in the U.S. and in Vietnam and Laos, living in a village and listening to poor peasants who were getting the shit bombed out of them by U.S. warplanes. Not to mention his work in Latin America and the whole society of linguists inspired by his work that documented indigenous languages there and acted to better their lives in the face of violence and repression. It's truly beyond belief.
That was the part of the Wolfe piece I found most reprehensible: he actually accused Chomsky of leveraging anti-Vietnam sentiment for personal gain. That's about the most disgusting invective I've ever seen come from someone so out of touch with reality.
He clearly values style over substance, publicity over science. E.g., the validity of the arguments in NPR is treated as irrelevant; all that's relevant is the level of media attention.
Recursion understood as Merge, which is clearly what Noam has meant at least since the late 1990s, makes clausal embedding even less relevant.
I think we might be able to do better with the CU's by finding ways to show that they give rise to what are basically GU's (typically conditional, perhaps often probabilistic: "if you see X, you will (usually) also see Y"), because without such a conversion path, they can be easily regarded as abstract vapor, and I think are so regarded by Everett, the 2 Evanses, Morten Christiansen, etc.
For example, from the Russian agreement facts in David's book you can formulate a conditional universal along the lines that if an NP shows internal agreement discrepancies, where some components are marked on a semantic basis and others on a nonsemantic basis, and the outer layer is semantic, then any agreement by verbs will be semantic, with the outer layer. Cases like Russian, where the semantically agreeing modifiers are on the opposite side of the formally marked head from the semantically marked verb, are a prima facie problem for Morten Christiansen etc., and the more of them that can be found, the worse it gets. Any semi-reasonable theory of formal syntax predicts that; linear structure does not.
Examples that involve relative corpus size would also be good, for example along the general lines of Charles Yang above. Presumably the corpora to which English and German child learners are exposed start out pretty similar in the complexity of their prenominal possessors, since the complex ones of English are rare, but presumably at some point the English learner is exposed to enough examples such as 'John's sister's dog' or 'the little girl's horse' that they pick up a full recursive rule rather than the limited one. But how much is enough? The form of argument is supposed to be that if we get X in a small corpus, we will get (or perhaps are only likely to get) Y in a (ideally much) bigger one, or in tests of intuitions.
[Martha McGinnis-Archibald:] Here's an even more embarrassingly ill-informed "Chomsky was wrong" article, pointed out by Keir Moulton: http://www.cbc.ca/beta/news/canada/edmonton/chomsky-was-wrong-university-of-alberta-study-finds-english-is-pretty-hard-1.3734028
You guys are pitiful: reactionary, defensive, obfuscating and childish. You talk only to each other and can't see the simple truth that Chomskyan linguistics has been a waste of time.
If someone has developed an alternative approach to language that has led to even a fraction as many insights and discoveries, it's a well-kept secret.
However, we need to find some new ways of explaining ourselves, since the old ones do not appear to be working very well.
Have there been any other proposals in the decade or more since Everett's IEP that have linked aspects of culture to specific grammatical properties, like the supposed lack of recursion? So something connecting culture and V2 or pro-drop or ergativity? And if not, why not?
Fred Karlsson has quite a bit of work on the effect of the introduction of writing on the prevalence of recursive constructions in various traditions. I don't think there's anything more specific.
QUESTION: I’d like to ask you about another of your detractors. When Bill Moyers interviewed Tom Wolfe on PBS, Wolfe accused you of subscribing to the “cabal” theory of capitalism. In Deterring Democracy you refer disparagingly to his description of the Reagan era as “one of the great golden moments that humanity has ever experienced.”
CHOMSKY: For people at his income level, that’s quite true. In my view, it was crucially responsible for — not 100 percent — the catastrophe of capitalism that just devastated the Third World in the Eighties. It was what they call the “lost decade” in the Third World. Tens of millions of people suffering and dying. In just the years 1980 to ’88, South African terror around its borders, supported by the United States, was responsible for about a million and a half people killed. If you count up the children who died of malnutrition as income levels dropped, you get a real monstrous toll. It’s bad enough what happened in the United States, if you look at any group other than the privileged. If you add all that up, it’s been a very ugly period. A person who could call that one of the golden moments in history… well, take Germany in 1939. A person who could call that one of the golden moments in history, we’d know what to think of him.
This is from 1992, and Chomsky appears to have quoted him in a book in 1991. Is this where Wolfe's animosity stems from? Does it go back further?
Also, you asked why Harper's would publish something like this instead of something a little more relevant. Might Wolfe's being married to one of the editors have something to do with it?
FYI, in their October issue, Harper's has published a letter of mine in response to Wolfe: