Here are some recent things that I found interesting that
may interest you as well.
On MOOCish matters:
http://www.voxeu.org/article/disruptive-potential-online-learning. The big finding is that employers don’t like MOOCs that much and treat them
as inferior degrees. This would change if, for example, places like Harvard,
MIT, and Stanford replaced the four-year college experience they currently offer
to elites with a MOOCish experience. When the well-to-do vote with their kids’
feet and buy into MOOC-based degrees, then everyone will. Till then, it will
largely be a way of bending some cost
curves (and you know whose) and not others.
Dan Everett (DE) still doesn’t understand what
a universal is: http://fivebooks.com/interviews/dan-everett-on-language-and-thought.
This little interview
is filled with exciting tidbits. Here are four:
(i)
Sapir’s
hypothesis concerning the interaction of language with thought is far more
modest than many have assumed. On DE’s interpretation, the Sapir-Whorf
hypothesis is not the rather exciting (but clearly wrong) view that “the language we
speak determines the way we can think” but the rather modest claim that “the
language we speak affects in some way some of the ways we think when we
need to think quickly.” Note the Kahnemanian tinge here. IMO, this is hardly an exciting thesis, and
it is little wonder that the strong version of the thesis is what aroused
interest. The weak version seems to me close to a truism.
(ii)
But a truism that Everett is impressed with. He claims that Sapir
discovered that “culture can influence language” and that though language
“clearly has some computational aspects that cannot be reduced to culture…there
are a number of broad characteristics that reflect the culture they emerge
from…”(3). I confess that this strikes me as obvious and is the first thing a neophyte learning a second
language focuses on. So, though Sapir is deserving of honors, it is not because
of this “insight.” Curiously, Everett seems not to have noticed that Sapir’s
first observation (i.e. that language is a kind of computational system) does
not impress him. Maybe that’s why Everett has problems understanding claims
that people make about such systems. In particular,
(iii)
DE still
confuses Chomsky Universals with Greenberg Universals. It comes across in DE’s
discussion of recursion where he once again asserts that the existence of a
finite language would undermine the Chomsky claim that language is recursive
(see answer to question 2). This is not
the claim. The claim is that UG produces Gs that are recursive. So the fact that FL endows humans with the capacity to acquire Gs that are
recursive does not imply either that every language has a recursive grammar or
that every speaker uses this capacity to produce endlessly large sentences. So,
even were Piraha a “finite language” as DE claims (and which, truth be told, I
still do not believe), it would imply nothing
whatsoever for Chomsky’s claim that it is a fact about FL/UG that language is
recursive. This is simply a non-sequitur based on DE’s misunderstanding of
what GGers take a universal to be (note his claim would be valid were he
to understand ‘universal’ in Greenbergian terms). However, do not expect DE to
ever lose this misunderstanding. As Upton Sinclair once noted: “It is
difficult to get a man to understand something when his salary depends on his
not understanding it.” What do you think the odds are that DE would be getting
interviewed here or featured in the New
Yorker or the Chronicle of
Higher Education were he not peddling the claim that his work on Piraha
showed that Chomsky’s work in linguistics was incorrect? Do I hear 0?
(iv)
DE does
not appear to understand that Gs can be recursive even if utterances have an
upper bound. I am not saying that
this is what is the case for Piraha. I am saying that recursion is a property
of Gs not of utterances. A mark of
recursion (i.e. evidence for recursive mechanisms) can be gleaned by looking to
see if the products of this mechanism are unbounded in length and depth. But the converse does not obtain: Gs might be
recursive even if utterances (their products) are bounded in size. DE seems to
think that during language acquisition, kids scale the Chomsky hierarchy, first
treating their language as a finite list, then as generated by a regular grammar,
then by a context free one, and then…all the way to mildly context sensitive ones. Where he
got this conception I cannot fathom. But there is no reason to think that this
is so. And if it is not, then given that Piraha speakers can learn what even DE considers recursive languages (a bad
locution, by the way, given that ‘recursive’ is properly speaking a predicate
of grammars, and only secondarily of their products) like Portuguese, it is clear
that they have the same UGs we all
do. And if this is right, then it is quite unlikely that they would not acquire
a recursive G even for Piraha. But this is a discussion for another time. Right
now it suffices for you to know that DE, it appears, cannot be taught and that
there is still a large and lucrative market for “Chomsky is wrong” material.
Big surprise.
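The logical point in (iv), that recursive rules are compatible with bounded outputs, can be made concrete with a toy sketch (my own illustration; it has nothing to do with the Piraha data): a grammar whose rule for S mentions S itself is recursive by definition, yet intersected with a depth filter it generates a finite set of products.

```python
# A toy illustration: a grammar whose rules are recursive -- the category S
# expands in terms of itself -- paired with an output filter bounding depth.
# The rule set is recursive; the filtered "language" is nonetheless finite.

def expand(symbol, depth, max_depth):
    """Expand S using the recursive rule S -> 'a' S | 'b',
    discarding derivations deeper than max_depth (the filter)."""
    if depth > max_depth:
        return []            # the filter: prune derivations past the bound
    if symbol != "S":
        return [symbol]
    results = ["b"]          # terminal expansion: S -> 'b'
    for tail in expand("S", depth + 1, max_depth):   # recursive: S -> 'a' S
        results.append("a " + tail)
    return results

# With the filter set at depth 3, the recursive grammar yields a finite set:
print(expand("S", 0, 3))   # ['b', 'a b', 'a a b', 'a a a b']
```

Drop the filter (let max_depth grow without bound) and the very same rule set generates an infinite language: the recursion lives in the rules, the finiteness in the filter.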
Genes and languages: The Atlantic
has a little piece showing that some “languages and genes do in fact share
similar geographical fault lines.” Apparently, whether this was so was a
question of interest to linguists. As the paper puts it: “Using new dataset and
statistical techniques, the researchers were able to scratch an itch linguists
and demographers have struggled to reach.” I confess to never having had this
itch so I am not sure why this observation is of particular interest to
linguists.
It is quite clear that
whatever genetic change occurred did not affect the basic structure of FL. How
do I know this? Because, so far as we can tell, any kid can still learn any
language in roughly the same way any other kid can. And, from what we can tell,
all Gs obey effectively the same kinds of general structure dependent
constraints. So, whatever the genetic changes, they did not affect those genes
undergirding FL/UG. Nor, so far as I can
tell, is there any reason to think that the phoneme properties and genetic
features that are tracked are in a causal relation (i.e. neither is the root
cause of the other’s change). It just seems that they swing together. But is
this really surprising? Don’t people who have similar phonemes tend to live
near each other? And as these kinds of genetic changes are subject to
environmental influence, is this really a surprise?
Maybe this is
interesting for some other reason. If so, please post a comment and let me know
what that interest is. I would love to know. Really. Here’s the link:
Some philo/history of science: I enjoyed this little piece
mainly for the discussion of the relationship between realism and mathematics
in the physical sciences historically. It suggests one way of understanding
Newton’s famous line about not feigning hypotheses. His theory gave a precise
mathematical understanding of gravity. He thought that this was enough and that
metaphysical speculations concerning its “reality” were not required from a
scientific theory. At any rate, there has been lots of
intellectual pulling and pushing about how to understand one’s theoretical
claims (e.g. realistically, instrumentally) and it is interesting to see a
little history.
Replication/Reproducibility
and stats in science: Here’s
yet another paper on replicability in the sciences (https://www.sciencenews.org/article/redoing-scientific-research-best-way-find-truth). Many factors are cited as creating problems,
but the one that I thought most provocative is at the end:
Much of the controversy has centered on the types of statistical analyses
used in most scientific studies, and hardly anyone disputes that the math is a
major tripping point…
There is a case to be
made that though statistics is in
principle useful, applying it correctly is very very hard. It’s one of
those things that are better in theory than they are in practice. And maybe any
paper dressed up in statistical garb should ipso
facto be treated cautiously. Right now we do the opposite: stats lend
credence to results. Might it be that they should be treated with suspicion until proven innocent? (For some useful
discussion of how even the best intentioned can go statistically astray, see this recent piece by Gelman and Loken.)
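To make the worry concrete, here is a toy simulation of my own (the analysis “paths” are invented for illustration and are not from the Gelman and Loken piece): data containing no real effect, analyzed under a handful of defensible post-hoc choices, crosses the nominal p < 0.05 threshold far more often than five percent of the time.

```python
# A hedged sketch of the "garden of forking paths": even with no real effect,
# an analyst who chooses among a few defensible analyses after seeing the data
# will cross p < 0.05 far more often than 5% of the time.

import random
import statistics

def t_like_p(sample):
    """Crude one-sample test of mean 0: True if |t| > 2 (roughly p < .05)."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return abs(m / (s / n ** 0.5)) > 2

random.seed(1)
hits = 0
trials = 2000
for _ in range(trials):
    data = [random.gauss(0, 1) for _ in range(40)]   # pure noise: no effect
    # Forking paths: full sample, outliers trimmed, or either half alone.
    paths = [data, [x for x in data if abs(x) < 2], data[:20], data[20:]]
    if any(t_like_p(p) for p in paths):              # report the "best" analysis
        hits += 1
print(hits / trials)   # well above the nominal 0.05
```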
One great scientist
who was very suspicious of statistical results, it seems, was Ernest
Rutherford. He was working at a time when physical theory was far more advanced
than anything we see in our part of the sciences. Here’s what he said: “If your
experiment needs statistics, you ought to have done a better experiment.” The
problems with replication seem to lend his one liner some weight, as does the
apparent difficulty inherent in doing one’s stats correctly.
Let me use this space to advertise an Introduction to Linguistics MOOC we are making for the University of Leiden and Coursera. I don't think anybody involved in creating this particular MOOC thinks that courses such as this are going to replace real courses, although they may replace books and other study *material*.
I think it's a mis-step for MOOCs to sell themselves as alternatives to college education (and they DO sell themselves now; paid versions of courses are granting fancy 'certificates'), and also for people to expect that employers should ever look favourably on them as such an alternative.
It seems to me that MOOCs can have one of two benefits. The first is to give people practical, employable skills like programming. Want to learn? Take a MOOC. But then when you go to an employer, you hardly need to tell them you've taken a MOOC in it, you just need to show you can do it.
The second benefit is to free up time in the college environment in order to reshape education there. Need to learn some fundamentals like calculus, probability, syntax or the like? Take a MOOC in that and then degrees will become about the more interesting applications and explorations of these that can't be replicated with distance learning.
That said, I'm rather partial to William Deresiewicz's New Republic piece, which was widely criticised: http://www.newrepublic.com/article/118747/ivy-league-schools-are-overrated-send-your-kids-elsewhere Much of top-end higher education functions like elitist branding to lure financial consultancy scum.
My only concern with using MOOCs like this is that people underestimate how much it's the case that *time*, as well as money, is largely a preserve of the rich, and if this means there will be greater competition amongst applicants to have much more preliminary experience through taking MOOCs alongside high school, then the fact that MOOCs are free isn't going to mean the benefits are shared equally.
Norbert, my reading of the Gelman and Loken article is that post-hoc justification (statistical or otherwise) for the data is close to useless, since there are simply too many analytical/theoretical possibilities. In some sense, the criticism applies to standard linguistic methodology too, where, to my eye, a lot of post-hoc analyses are presented as predicting the data that has been analysed. The only reason that linguistic theories (primarily, syntactic/semantic analyses) might be less susceptible to the problems assessed by G&L is that the data that has been dealt with is relatively clean - not much variance (“low-hanging fruit”). The moment the variance becomes a problem (with subtle judgments), post-hoc justification becomes a bigger problem. Phonological discussions have already started suffering from this problem because they deal with a lot of gradient phonotactic/fuzzy data these days.
So, I see Gelman & Loken as warning researchers to commit to separating true predictions from fishing expeditions, and to separating true predictions from “post-dictions” (the latter of which is extremely common in standard linguistic arguments too). Their commentary, to me, is not about statistics per se; it is about methodology that might seem convincing, but really isn’t. All this is not to say there is no place for post-hoc justification (surely, science needs it for new ideas). But that is not to be confused with predictions and true testing.
On a related note: This might be interesting reading, to say the least. It’s a short editorial banning the use of NHST and most inferential statistics from their articles. Not an opinion I agree with - the call in my opinion should be for better and more careful stats. But, to each their own. Maybe this is the only thing that can be done to stop the obsession with p<0.05 rampant now, even in linguistic papers.
http://www.tandfonline.com/doi/full/10.1080/01973533.2015.1012991#abstract
@karthik
I agree with your reading. This problem is not limited to stat based analyses. However, it is important, IMO, to make this point in a stats context for there is the assumption that once one presents things with P-values or error bars then one has done something that is inherently better than what lazy linguists do. There is, in other words, a delight in the shape of things rather than an evaluation/judgment of the whole story. So, to debunk this supposition of saintliness it is worth having these kinds of critiques.
I also agree with the low hanging fruit issue. This is actually how I understood Rutherford as well. The right experiment does not need statistical massaging. The significance of the result is pretty clear. I recall David Poeppel telling me that this is what his stats prof told the class as well: a well designed study does not really need what he was about to teach them, so when you think you need to deploy the armamentarium, first consider rethinking/redoing the experiment.
Last point. One of the nicest features of cross linguistic work is that what is subtle in one language is not in another. Sometimes an interesting fact correlates with overt morphology, for example, making a judgment in a language easier for native speakers. As we are looking at UG as well as G this can be leveraged usefully in the study of other languages. So, often an alternative to statistically massaging some results in language 1 is to look for the same thing in a language where the judgment is clearer. This occasionally happens and when it does it's neat.
The most interesting thing I got from the Gelman-Loken paper was how hard it was to do things right. It's not good if the tools required to get a result are too hard for people to use correctly. I assume that this is not a necessary feature of this stuff, but it does seem to be a challenge. At least for the time being maybe the slogan should be "guilty till proven innocent" thus forcing those that use these methods to defend them up front as done right and being relevant. Or maybe "verify then trust."
Thx for the comment.
I am simply not sure of what the right strategy is moving forward, but I do agree with everything else you have said. The cross-linguistic studies to me are linguistics’ version of replication, with a slight twist - so I really appreciate them.
Also, I should say this more often (following David Pesetsky, who in my opinion invariably points it out): Thanks for maintaining such an excellent blog, and having the energy/patience to respond to all the comments so promptly. I am sure even those who disagree with your opinion are able to appreciate the effort that goes into this blog!
Just so you know, this stuff does go to my head.
I agree that the cross linguistic studies are our analogue of replication (or maybe reproduction (there's some difference between the two apparently)). However, it may be a little more than that (hence your twist?). They are the analogue of picking the right model species or the right experimental set up to test a hypothesis. Some languages, for whatever fortuitous reason, make it easier to see some facts more clearly. For example, the redoing of wh "movement" in the East Asian languages really enriched our understanding of the ECP and Islands. We had analogous data from multiple interrogative constructions in English, but frankly the data were rather precious and judgments none too clear. Apparently the same could not be said for the data in Chinese, Japanese and Korean. There we had pretty clear contrasts concerning wh-in-situ constructions. The same with WCO effects under scrambling in German and Japanese vs psych verbs in English. And my favorite is the data from the Italian dialects (Brandi and Cordin) on Rizzi's hypothesis that Italian actually obeyed the fixed subject condition. I am sure my comparative colleagues could cite more cases where what was obscure in language A was crystal clear in B. And when we are lucky enough to find this, we substitute what you earlier called "low hanging fruit" for subtle stats-needy data.
>>(or maybe reproduction (there's some difference between the two apparently)). However, it may be a little more than that (hence your twist?)
Yes :).
This is posted for Mark Johnson who, no doubt for all the right reasons (here that NSA!) is having endless trouble posting on this blog. At the risk of being an accomplice to I know not what, I post here for him. Here's Mark:
I also second the thanks for maintaining this blog!
I realise what I'm about to say is likely to annoy everyone in the Piraha debate, but here goes anyway.
I suspect that the only reason why we see recursion in syntax is because our Language of Thought (or whatever you want to call it) provides us with recursive thoughts. But there are ways of expressing recursive thoughts that don't require recursive syntax, and maybe that's what's going on in Piraha.
For example, sentential anaphora permits us to express a single complex thought using several simple sentences. "Sam suspects Sasha thinks Sandy hates Alex" can also be expressed as "Sam suspects something. It is that Sasha thinks something else. It is that Sandy hates Alex". So we can express an arbitrarily deeply embedded thought via a sequence of sentences with only depth 2 clausal embedding by using sentential anaphora.
It could even be a cultural issue as to whether you prefer to express your recursive thoughts using syntactic recursion or other devices such as anaphora. (It's not a property of a language -- English lets you use both syntactic embedding and sentential anaphora).
I'd expect all languages to be able to express recursive thoughts somehow, e.g., using sentential anaphora. But of course there are perfectly reasonable thoughts that seem to be ineffable in English, and it's possible that the set of ineffable thoughts varies from language to language. I'd be surprised if the Piraha couldn't conceive of thoughts with arbitrary depth of recursion, though, as recursive thoughts seem central to e.g., the theory of mind.
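Mark's paraphrase scheme can be sketched mechanically (a toy of my own; the `flatten` helper and its tuple encoding of thoughts are invented for illustration):

```python
# Toy sketch of the anaphora point: a thought with arbitrary clausal embedding
# can be expressed as a sequence of sentences each with at most one level of
# embedding, using sentential anaphora ("It is that ...").

def flatten(thought):
    """thought: either a plain string ("Sandy hates Alex") or a tuple
    (subject, verb, complement-thought). Returns a list of sentences,
    each embedding at most one clause."""
    if isinstance(thought, str):
        return [thought + "."]
    subj, verb, comp = thought
    first = f"{subj} {verb} something."
    if isinstance(comp, str):
        rest = [f"It is that {comp}."]
    else:
        inner = flatten(comp)                       # recurse on the complement
        rest = ["It is that " + inner[0]] + inner[1:]
    return [first] + rest

nested = ("Sam", "suspects", ("Sasha", "thinks", "Sandy hates Alex"))
for s in flatten(nested):
    print(s)
# Prints:
# Sam suspects something.
# It is that Sasha thinks something.
# It is that Sandy hates Alex.
```

However deep the input thought, every output sentence embeds at most one clause: the recursion has been traded for a sequence plus anaphora.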
Norbert meant to write "hear that NSA!". Though they probably are here as well.
I don't see why this would anger some of us. I for one am fine with the observation. I think that Everett does like to think that recursive sentence structure is required for fancy thinking. I have no dog in that fight. I think that even if Piraha speakers when speaking Piraha are complex clause limited this has little to tell us about the structure of Piraha speakers' FLs. I am pretty sure that theirs is like ours as they can learn and use Brazilian Portuguese. As you note, there are ways of expressing recursive thoughts using inter-clausal anaphora. Why they decide to do this, if they do, is (possibly) an interesting question. But it says nothing about Chomsky Universals, contrary to what Everett seems to believe (despite many many attempts at straightening him out). But we all know why that is (see Mr Sinclair).
The problem with using inter sentential anaphora is negation. Think of 'Sam didn't think that Mary was happy'. Can't be paraphrased. Everett says these aren't expressible in Piraha.
DeleteI might be missing something obvious, but what's wrong with
Mary was happy. Sam didn't think that.
There is one difference. 'Sam doesn't think that Mary is happy' does not imply that Mary is happy. Your two sentence analog does.
David, say you are right, then if Everett is reporting correctly (always an open question IMO) then doesn't this support Mark's suggestion?
If that's the sole issue, I don't see it. We can obviously modify the 'force' of our utterances in a discourse. So, somewhat more explicitly:
Here's a possible thought. (It might be true. It might not be true.) Mary is happy. Sam does not think this.
Actually, Davidson proposed this kind of "anaphoric" analysis in his "On saying that". But I think we all agree this whole "recursion"-issue is somewhat of a red herring, no?
I hope we do. It is for me.
I think so too.
DeleteI should add that iven what we believe about binding thoughts like 'no one believes he is going to win' will also be hard to code. Paraphrases like 'he is going to win. Nobody believes that' don't quite work. That said thinking of these cases is interesting for IF it were true that anything effable using embedding were sayable without this would arge against an obvious functional expkanation for why such embedding is grammatically ubiquitous.
@Benjamin I don't think your example works though, as you still need to embed the 'Mary is happy' proposition under a truth predicate. Might be doable via evidentiality but Piraha doesn't have that. And the binding cases Norbert mentioned are also relevant. If you think of the various cases where DRT applies special technology, that generates a whole pile of examples. If a cat miaows, it's hungry; every cat that miaows is hungry; John didn't think a crocodile was in the water; no one who speaks Piraha also speaks Portuguese. Etc. To get these meanings you seem to require conjunction, which Everett says is lacking, and cross clausal scope for binding, which is also meant to be lacking. More crucially I think they'd violate his cultural immediacy principle. So if they are lacking, there's a real effability issue, even given paratactic periphrasis.
DeleteFair enough, I'm not at all trying to make Everetts's claims reasonable, nor to defend any of his particular linguistic analyses. But I still find the idea that "syntactic recursion" is somehow crucial still unconvincing. Even binding examples can be made to work, no? What's wrong with
Consider all Piraha speakers. None (also) speak Portuguese.
I'm inclined to think that even donkey-sentences could be made to work (though I might be wrong), but I don't think any of this is really important.
Isn't the substantive point simply that one way or another, either the Piraha have the ability to use the kind of "recursion" Everett is so vocal in claiming that they lack; or there are very basic thoughts they would be unable to express in their language? Which is quite a convincing reductio of the claim.
As someone's modus ponens can be someone else's modus tollens, the fact that I think it's absurd and you do and Dave does will only be taken as evidence of an amazing breakthrough by DE. A Sapir-Whorf discovery of the highest order. He already seems to think the Piraha have an odd conception of time, so why not this?
I think there is another important issue. Even if DE is right, which I think is very very unlikely, his conclusions regarding Universals are wrong. It's based on a pun, a simple equivocation. He fails to distinguish two different conceptions of universal. He makes the Roseanne Roseannadanna error, but with him it's not a joke (though he is laughing all the way into the popular media, shame on them!).
It pains possibly not just me to read these guessing games about Piraha. Dan Everett has addressed some of Norbert's concerns on his new blog and allowed me to post the link for your [pl] education:
http://daneverettbooks.com/chomskyan-vs-greenbergian-universals/#comments
I urge everyone to read the DE link that CB provides. DE really does not know what a Chomsky Universal is. Chomsky Universals are properties of FL/UG, not of Gs. It describes, in other words, our capacity for language. To say that recursion is characteristic of FL is to say that humans have the capacity to acquire Gs that are hierarchically recursive. It does not say that every G necessarily displays this property (though, let me say again, that I really do not believe DE's characterization of Piraha. Asked to bet between him and David Pesetsky, Cilene Rodrigues and Andrew Nevins, I will take the last trio over DE anytime). So do Piraha speakers have different FL/UGs than English or Portuguese or French or Swahili or Inuit or…speakers do? Nope. How do we know? Because we know that they can acquire and speak Brazilian Portuguese. How do we know? Because they in fact have been known to do this. As I've been saying for three years at least, but to no avail: DE DOES NOT KNOW WHAT A CHOMSKY UNIVERSAL IS. Period. Nor will he ever learn, for obvious reasons. Read his blog entry and see.
Note, none of this addresses what seems to concern DE; whether or not Piraha G generates structures via Merge. It simply notes that whether or not they do (and again, see David Pesetsky and company on this) is not relevant to the question of what FL can and cannot do.
So is there any reason for thinking that Piraha Gs generate structure using merge? Well there is a conditional argument. It goes like this: Given that Piraha speakers can use a mergish G why wouldn't they use such an operation in acquiring Piraha Gs? But the products of Piraha are not unboundedly long or deep, DE says. Ok, does having mergish Gs IMPLY that the products must be so? Nope. You can intersect a mergish G with filters that limit output to whatever finite degree you wish. Hence the absence of unbounded hierarchical structure in Piraha, should this be a fact (which, again, let me say that I strongly doubt) does not imply that speakers of Piraha do not have mergish Gs. Are there arguments in favor of generating even finite "languages" using recursive rules plus filters? Sure, it is possible to provide very compact descriptions of the Gs in these terms and compactness can be a useful property to have (see Stabler on this). So, even wrt Piraha DE has not made the case that it does not employ mergish Gs.
So, DE does not understand what a Chomsky Universal is, nor the logic of UG nor the point that finite products do not imply non recursive rules. It really is depressing to see again and again. But in case you were wondering, CB has sealed the deal for us all by quoting the man himself. For those interested in seeing this point made again and again and again a long time ago see my first blog entry on FoL and go and take a look at the comments section of the Chronicle piece on DE. Me and a few others go over this point endlessly.
"But as I have said the Chomskyan view renders ... Merge untestable....most of my work on Piraha .. has been to show that the predictions Merge makes are all falsified."
So, Merge is untestable, but he's falsified it. Quite an incredible achievement! Note all the critical comments Christina has below the blogpost. Oh wait, ....!
Fascinating. So, according to Norbert, "whether or not Piraha G generates structures via Merge ... is not relevant to the question of what FL can and cannot do." In other words, what the G of the language one investigates actually generates is irrelevant to what FL can do. I am glad we have established that the study of individual Gs tells us nothing about what FL can do [what holds for Piraha Gs holds for English or German or Italian Gs, right?] Just hard to comprehend why David Pesetsky et al. were so outraged that Vyv Evans attributed such a view to Chomsky...
Some have trouble with modal reasoning. What X does is not the same as what X can do. That L is not recursive does not imply that recursion is not a deep property of FL. That I drive my car at 50 does not mean that it cannot drive at 90. It seems that not only DE has problems with easy concepts. Of course studying individual Gs can tell us a lot about some things. But not about all things. That English does not have long distance anaphors does not imply that FL has no LDAs. That English does not have multiple WHs in CP does not imply that this is not possible. We are interested in the properties of FL and study them via the properties of Gs. But the fact that some G lacks a property need not tell us much about FL/UG.
DeleteOf course, I would never concede that Piraha does lack this property. As I said, I am far less convinced by DE's discussion than that of others. These elementary points have eluded DE, and it appears some of his champions.
Splendid what Norbert reveals while not talking to me. Let's review a few points:
1. "I really do not believe DE's characterization of Piraha. Asked to bet between him and David Pesetsky, Cilene Rodrigues and Andre Nevins, I will take the last trio over DE anytime"
One has to let this sink in: Everett has studied Piraha for 30+ years. The trio knows probably as much Piraha as I do Italian. So, according to Norbert, we should take the judgment of someone who defends our favourite framework over that of someone who studied the language. It seems to follow that in case someone does not like Luigi Rizzi's analysis of Italian: they can just ask Christina for an alternative [or maybe Paul Postal if you prefer a linguist who is unfamiliar with the language]
2. "We are interested in the properties of FL and study them via the properties of Gs. But the fact that some G lacks a property need not tell us much about FL/UG."
Interesting. So you study Gs to learn about UG. But from the facts about which properties any given G has, nothing follows about UG. And presumably the upshot is that knowing your most recent definition of UG won't help me to make any prediction about the G of the language I study - because my language could lack every property so far attributed to UG.
3. Even though any given G could lack any property of UG someone as unfamiliar with a particular language as the trio is with Piraha just knows which property the Piraha G must have. Presumably this is the abductive instinct at work?
4. Given that any G can lack properties of UG one has to wonder what makes UG universal? Rrrright, it's part of our biological endowment. And we believe that without any evidence from neuroscience or genetics because Chomsky said so. Or Norbert and Cedric: "The only thing that makes something an I-universal on this view is that it is a property of our innate ability to grow language" (Hornstein & Boeckx, 2009, 81) - circular reasoning at its finest.
Self parody is such fun to watch. I cannot possibly make a comment more persuasive than your last reply. Thx for making my case so perfectly. Adieu.
I am glad I could assist in your self-parody, Norbert :)
DeleteThis comment has been removed by the author.
CB: One has to let this sink in: Everett has studied Piraha for 30+ years. The trio knows probably as much Piraha as I do Italian. So, according to Norbert, we should take the judgment of someone who defends our favourite framework over that of someone who studied the language. It seems to follow that in case someone does not like Luigi Rizzi's analysis of Italian: they can just ask Christina for an alternative [or maybe Paul Postal if you prefer a linguist who is unfamiliar with the language]
But of course we can ask Christina or Paul for an alternative to an analysis by Rizzi, and if it's cogent, of course it should be taken seriously. The whole point of publishing a paper is to externalize our expertise, by making our data, arguments, and conclusions transparent — thereby placing them at the service of anyone else who wants to rethink our analysis. That is exactly the process that Cilene, Andrew and I engaged in in our published responses to DE's claims, reviewing as comprehensively as we could his data, analysis and arguments, such as they were at the time, reaching conclusions that do not have to be rehearsed again here. I too will say adieu, as I don't want to rehash the Pirahã business, but this seemed like too important a point to leave without a response.
Huh, this is actually marginally interesting, I think. When did Everett start talking about Merge? Last I had seen, he was still fixated on embedding.
For example, to quote from some of Everett's collaborative work with Steven Piantadosi, Laura Stearns, and Ted Gibson, Everett assumes that recursion is "self-embedding of a syntactic category, thus allowing for an infinite number of sentences". The quote comes from page 6 of slides that were presented by Ted Gibson at the LSA meeting in January of 2012.
This, of course, is already discussed by Nevins et al. (2009), and I agree with the sentiment already expressed here that there is probably no point in rehashing this.
But what seems marginally interesting, here, is that Everett has changed his tune and is actually talking about recursion of the operation Merge in the blog post that Christina linked to. I think it's interesting because, if you look at the slides presented at the LSA meeting, Piantadosi et al. found some evidence for syntactic embedding (i.e., Everett's old working definition of 'recursion') in topics and repeated arguments of Pirahã (see pp. 29-33 of the slides).
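The two senses of 'recursion' being contrasted here can be kept apart with a toy sketch (this is purely my own illustration, not anything from Everett, Piantadosi et al., or the thread; all names in the code are invented). The point is that recursive application of Merge yields structures of unbounded depth even when no category ever contains another instance of its own type:

```python
# Toy illustration (hypothetical names): Merge as binary set formation.

def merge(a, b):
    """Merge two syntactic objects into an unordered set {a, b}."""
    return frozenset([a, b])

def depth(obj):
    """Nesting depth of an object built by Merge; lexical items have depth 0."""
    if isinstance(obj, frozenset):
        return 1 + max(depth(x) for x in obj)
    return 0

# Recursive Merge: each output feeds back in as an input, so depth is
# unbounded -- even though no phrase here contains another phrase of its
# own category (no NP inside an NP, i.e. no 'self-embedding').
so = merge("dog", "barks")        # {dog, barks}
so = merge("the", so)             # {the, {dog, barks}}
so = merge("maybe", so)           # {maybe, {the, {dog, barks}}}

print(depth(so))  # 3 -- grows by one with each further application of Merge
```

On this toy picture, the older criterion (self-embedding of a syntactic category) and the criterion of recursive Merge come apart: the derivation above satisfies the latter without the former, which is why the change of definition matters.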
Two further things make this even slightly more interesting: First, this collaborative work was presented in January of 2012. Though it seems that only Ted presented the slides, it is collaborative work with Everett; his name is on the slides. And more than a year later, on July 3, 2013, at the Summer Linguistics Institute there was a film screening of The Grammar of Happiness and a Q&A session with Dan Everett. From what I remember, there was absolutely no mention of this by Everett in the Q&A. I honestly cannot remember if recursion was explicitly discussed or not, but it seems disingenuous for Everett to continue making out as if there is no evidence for syntactic embedding ('recursion') in Pirahã, especially when that was the exact topic of the film that was screened. (Perhaps there are FoL readers out there who were at the screening, too, but remember it better than I do? Did recursion explicitly come up in the Q&A session?)
Second, if you look at an old version of Steven's website from October 2014 and scroll down to the bibliographic information for the LSA talk, you will see that it says "Paper in progress" next to the talk. (The link makes use of the Wayback Machine to see a snapshot of his website from that month.) On the other hand, if you look at today's version of Steven's website, it seems the paper is no longer listed as in progress, nor has it been published anywhere. Of course, it's possible that it still is in progress, has been submitted, or is under review, or something. But Steven seems pretty meticulous about listing all of his papers that are in press, submitted, or under review, and it does not show up. Curious.
So, three years later, why have these results still not shown up in print? (Granted, I am still too early in my career to know anything about publishing. But three years seems like a bit of a stretch, no?) And why has Everett changed his tune and is now actually talking about recursion rather than embedding?
Finally, I feel it's worth mentioning that one of Everett's criteria for falsifying the non-falsifiable Merge—good catch, Karthik!—is not strictly true. As Chomsky (2013) points out, structures need not be endocentric on the simplest conception of Merge (pp. 42-43).
H⁰
Also, now that there is an intervening head—thereby preventing the following from getting to your head, Norbert—let me echo the thanks of Karthik and Mark! :P
I've really enjoyed this blog. Thanks for all your time and work that has gone into it! :)
It is somewhat surprising how much literary license David P. applies to what I wrote [vs. how he nailed Vyv to the exact letter of what Chomsky wrote]. As is evident from the bit quoted, my point was not that other analyses should not be taken seriously but that it is curious that an analysis of a trio that is not familiar with the language ought to be taken OVER that of a person who knows the language - an entirely different matter.
I take it that your assurance indicates that the long history of ignoring entirely cogent [albeit critical of the Chomskyan paradigm] analyses by Paul is over now, and I am looking forward to reading your discussion of them in the near future. A reply to http://ling.auf.net/lingbuzz/002006 would be an excellent start.
@Adam: I would suggest that, if you are wondering what the status of someone's paper is, you just contact the author and ask [Steven has an e-mail address on his website], instead of hinting that something fishy is going on. As most of us know, the publication business moves in mysterious ways and sometimes it does not move at all - just the other day I learned that an editor had forgotten about a paper I had submitted more than a year ago...
@Christina: Fair enough. Like I said, I don't know anything about publishing. And I have emailed Steven and am waiting on a response.
All I was pointing out is that it's interesting that Everett is now actually saying that recursion is recursive Merge when he has been eschewing this definition of recursion for the last ten or so years. Also, don't you think it's disingenuous of him to not mention these results? As far as I know, he hasn't mentioned them in the last three years. If Piantadosi et al. are right, Pirahã does have embedding, which was Everett's definition of recursion until recently.
Literary license? None at all. You think that "it is curious that an analysis of a trio that is not familiar with the language ought to be taken OVER that of a person who knows the language - an entirely different matter". I think it would be curious to do anything else, if the analysis of the trio deals with the same problems and is the better of the two analyses. If it's worse, then make the opposite decision.
Out of curiosity: is there a theory-independent way to determine which one is the better analysis?
I just got a response from Steve(n)—he signs his emails as Steve—and it sounds like the paper is still in progress. He expects that it might be done in a month or two. So it should be interesting to see what the paper has to say.
It seems that there is no upper bound on sentence length in Pirahã. So it's interesting (and perhaps unsurprising) to see that Everett has now changed his definition of recursion to be recursive Merge rather than syntactic embedding.
@Christina 1 posting up: in principle yes, if one analysis within its theory can accommodate more additional data with less trouble than the other (within its theory). In practice, not necessarily.
Thanks Avery. This was of course the problem I was getting at: David P. prefers the analysis of Nevins et al. because it vindicates the framework he prefers. Someone who prefers a different framework may prefer another analysis [which does not have to be Everett's]. Now if, by definition, an analysis of someone who has no familiarity at all with the language under investigation can be just as good as that of someone who is intimately familiar with that language, there seems no objective way to adjudicate [because presumably your framework also determines what counts as data and what does not]. At this point one could just agree to disagree but that is not what David P. suggests - he insists that everyone accept the superiority of the analysis he prefers...
"David P. prefers the analysis of Nevins et al. because it vindicates the framework he prefers."
No, that's not the reason. Re-read what I wrote above and re-read the article under discussion. My final comment on this here.
Since you refused to answer whether there is a theory independent way to decide which analysis is better, there is no need for me to re-read the two Nevins et al. articles. They did not convince me in the past because, unlike you, I consider it at least possible that your framework is wrong.
In general, much of the recent biolinguistics debate reminds me of the attitude Chomsky was criticizing in the passage below. In fact, your attitude in particular reminds me of it:
"...advocates of these notions often do not formulate them with sufficient clarity so that there could be disconfirming evidence, and in the face of...[counterexamples] simply reiterate their hypothesis that some adequate theory can be developed along the lines they advocate. Such a 'hypothesis' is not to be confused with explanatory hypotheses... the term prayer might be more fitting than 'hypothesis'. Note that suspension of judgment with respect to apparently intractable evidence is a reasonable, in fact necessary, stance in rational inquiry, but there comes a point when it becomes irrational, particularly when alternative and more adequate theories are available." (Chomsky 1975, 243-4)
Except by now his 'theories' are the ones formulated with insufficient clarity and simply reiterated [and bolstered with some theory-internal reanalysis of empirical findings] in the face of counterevidence ...
A fun XKCD comic about p-values.
Long-term readers of Norbert's blog no doubt notice an interesting trend: a couple of years ago the main targets of criticism were articles published in top journals. By now much of the focus is on popular books [Evans' The Language Myth], online comments, or informal interviews [Everett above]. Presumably this indicates that critics of Chomsky are now also justified in concluding what he does or does not understand based on interview volumes - what's good for the goose...
Another question arises. Norbert writes:
"However, do not expect DE to ever lose this misunderstanding. As Upton Sinclair once noted: “It is difficult to get a man to understand something when his salary depends on his not understanding it.” What do you think the odds are that DE would be getting interviewed here or featured in the New Yorker or the Chronicle of Higher Education were he not peddling the claim that his work on Piraha showed that Chomsky's work in linguistics was incorrect? Do I hear 0?"
If criticizing Chomsky is so profitable, one imagines defending him generates even greater rewards. If Norbert does not wish his readers to start thinking along those lines he may want to remove the insulting remark on Everett's motive...
Re the genes and language piece, I don't think you were the target audience. This kind of research has been around since the 80s if I recall, showing that basically languages hew pretty closely to one genetic line and human breeding hews pretty closely to people who speak the same language. It always stuck with me as pretty surprising and one of those incredibly useful things to know when you're studying the history of human migration. It's of no immediate use unless you're asking questions about that as far as I can see.
Less immediately, though, historical relationships between languages _are_ relevant when you're studying Greenberg universals, as a nuisance factor that you need to get rid of (and you therefore should have a good model of). This kind of result means in principle you should be able to (carefully) plug in genetic data to improve your model when you're on the search for surfacey universals.
And the presupposition that something like this was true was what lent credence to this week's _other_ historical linguistics story, about Indo-European origins (http://news.sciencemag.org/archaeology/2015/02/indo-european-languages-tied-herders).
As a less sincere point of rhetoric, I find it amusing that DE's blog post cited by CB in the comments was published the day after this post and in it he throws back the exact same Upton Sinclair quotation that you used above. Perhaps he is a regular reader and has at least taken up some of your ideas. I would still take you both to task on it, however! Jibes at other people's intelligence can be fun, but when does a discussion ever profit from imputing nefarious motives? And if such accusations about money are true, or even just sincerely believed, then you're surely both making fools of yourselves by spending time constructing arguments where only bribes have power. In any case, as far as speculative psychoanalysis goes, it looks to me that linguistics is fought on the battleground of ego and self-pity much more than financial gain...
Glad to hear he might be reading FoL. Maybe it will help him get the difference between Chomsky and Greenberg Universals. But as for your second point, I am not suggesting he is dumb. I am suggesting the opposite. He must understand the distinction because it is not that hard to understand. He might not like Chomsky Universals and think they don't exist, but that would require argument. That said, I am sure DE gets the distinction. But he cannot now afford to internalize this. Why? Because the only possible reason that DE is a media star is that he is the anti-Chomsky. I know this. He knows this. Indeed, everyone knows this. Nobody really cares about Piraha grammar outside of linguistics. What the New Yorker and the Chronicle cared about was that Piraha purportedly showed that CHOMSKY WAS WRONG. That's what's given DE his current fame. And he knows it. So given that, do you really think he is about to throw it all away by engaging with the central issue? Is he going to say, sorry, the whole discussion was based on a pun? Nope. Won't happen. So, the relevance of Mr Sinclair.
So I believe that everything is a function of money? No, there are other lures as well, fame being one, ego another, and many others I am sure. However, I don't think that this is always a temptation, just that it often is. And it makes sense to ask what's going on if after several years of making the same points nothing changes. Look, I think that Chomsky is right to think that there are his kind of universals. I try to find reasons for this. DE does not. He simply continues to misinterpret. Were his views getting no attention, I would ignore them. His claims are really not worth debating. They are BORING. But given that they do get traction for the reasons I mention (there really is a bad case of Chomsky derangement syndrome out there), I feel I must. Oh the cross I bear! It's no fun. Woe is me! Woe is me!
That said, in general, I think you are right. But maybe DE is the exception that proves the rule (whatever the hell that means).
I think DE understands the idea of Chomsky Universals just fine ... what he lacks is faith that present mathematical methods allow them to support any empirical claims about what we can expect to observe in language (basically, Greenberg Universals). For the reason that, whatever the basic principles say, there's a very ill-defined set of auxiliary hypotheses that you can use to fit an indeterminate range of unexpected, unpredicted phenomena into (such as the surprising absence of any branching modifiers inside Pirahã NPs, which I don't think anybody has challenged yet).
I think he's essentially correct in this as an observation about now, but diverge from his view in that I think that current theoretical and mathematical work is actually making progress towards fixing this, at least to the point that some anomalies can be characterized as less unexpected than others (the former being the ones with 'nicer pigeonholes', to adapt the terminology from Pesetsky's Russian Case book).
"I am sure DE gets the distinction. But he cannot now afford to internalize this. Why? Because the only possible reason that DE is a media star is that he is the anti-Chomsky. I know this. He knows this. Indeed, everyone knows this. Nobody really cares about Piraha grammar outside of linguistics. What the New Yorker and the Chronicle cared about was that Piraha purportedly showed that CHOMSKY WAS WRONG. That's what's given DE his current fame. And he knows it. So given that, do you really think he is about to throw it all away by engaging with the central issue? Is he going to say, sorry, the whole discussion was based on a pun? Nope. Won't happen. So, the relevance of Mr Sinclair."
I admit I owe Norbert an apology. Each time I think his attacks could not go any further below the belt he shows they can. One has to wonder: why this fixation on Dan Everett? Is he the only or even the most serious threat to Chomsky's framework? Hardly. Why would any serious scientist [and Norbert purports to be one] pay so much attention to what is said in 'the media' about Everett's work [or launch an all-out war on a popular book like Evans' "The Language Myth"] instead of engaging with criticism published in linguistic journals [and on LingBuzz] by syntacticians? Why did he not pick up when David Adger dropped the ball and demonstrate why the minimalist analysis of one of his pet examples is not inferior to the analysis I suggested?
Minimalists [Norbert among them] have claimed that Everett's findings do not matter one way or the other: if his analysis is incorrect they do not matter for that reason - if his analysis is correct they do not matter because the 'recursion [properly understood] is just one tool among many and not every language uses all tools'-defence kicks in. So why does Norbert still go on and on about Everett and does not deal with the far more serious threats to minimalism? Maybe this carefully orchestrated Everett Outrage is just a smoke screen to distract from the fact that the Minimalist Program is deeply flawed and that none of the "undeniable accomplishments of the last 60 years" stand scientific scrutiny?