First, for those interested in the "replication crisis" in science (well, not "real" science, but the wannabes like psych and the other social sciences), here is a good discussion by Gelman. I think I've said this before, but his blog is quite interesting, especially on matters statistical. And though I am no expert on these issues, I am told that he is a big deal. I do know that his posts always get me thinking (even if I sometimes disagree, especially about Hauser (against whom he takes another quick shot here)).
What I like about this post is his distinguishing between the statistical significance of an effect (even if done right) and what is really of moment, the "size, direction and structure of …effects." The hard part of doing science is finding that thing that is worth studying and that can be studied. What this is changes over time. As our knowledge deepens and our methods improve, things that were beyond the reach of rational inquiry become amenable to scientific inquiry. However, lots of "research" ends up studying either what's not there or what cannot be sufficiently isolated so as to be studyable given our methods. As Gelman puts it:
But . . . all the analysis and replication in the world won't save you, if what you're studying just isn't there, or if any effects are swamped by variation.

So the trick is to find the right problem, at the right time, given what we know and what we can do. And one problem with stats, IMO (and maybe Gelman's), is that it provides a technology that obscures this when misused AND, it appears, the tools are easy enough to misuse. Stats are a little like Kabbalah: only safe in the hands of the wise.
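Gelman has a name for what goes wrong when the effect is tiny and the variation large: "Type M" (magnitude) and "Type S" (sign) errors. A toy simulation makes the point vivid. The numbers below (true effect, noise level, sample size) are my own illustrative choices, not anything from his post:

```python
import random
import statistics

# A tiny true effect buried in large variation. We "run the study" many
# times, keep only the statistically significant results (as the
# literature effectively does), and look at what would get reported.
random.seed(1)

TRUE_EFFECT = 0.1   # small real effect (arbitrary illustrative value)
NOISE_SD = 1.0      # variation that swamps it
N = 30              # per-study sample size
STUDIES = 2000

significant_estimates = []
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if abs(mean / se) > 1.96:          # "significant at p < .05"
        significant_estimates.append(mean)

# Conditioning on significance inflates the reported effect size
# (Type M), and some estimates even have the wrong sign (Type S).
avg_reported = statistics.fmean(significant_estimates)
wrong_sign = sum(e < 0 for e in significant_estimates) / len(significant_estimates)
print(f"true effect: {TRUE_EFFECT}")
print(f"avg significant estimate: {avg_reported:.2f}")
print(f"fraction with wrong sign: {wrong_sign:.2f}")
```

With these numbers, significance requires a sample mean of roughly 0.36 in magnitude, so every "significant" study reports an effect at least several times the true 0.1, and a few point the wrong way entirely. Replicate that all you like; the underlying effect is still swamped.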
This is not quite the moral that Gelman draws. He wants us to learn to use these techniques correctly. But if I understand his post, the problem is with the subject matter, not the techniques. There are lots of small interacting causes out there underlying "behavior." That's why behavior is probably not worth studying (why? because we suck at studying interaction effects even in "real" sciences). We need to study natural systems, but identifying these is no trivial matter.
We in linguistics are lucky. It is not hard to pre-theoretically identify a domain that is relatively context invariant. This is why, IMO, linguists can get somewhere. So too people working on faces, or songs in birds, or approximate numbers, or early vision, or… But this is not the norm. These are areas where we can move from studying behavior (what we see every day) to studying the natural cognitive systems related to this behavior. This is effectively Jerry Fodor's old point: we can get somewhere in a science to the degree that its object of study is modular. The less the modularity, the smaller the chance that it can be usefully studied. I think that Gelman is making a very similar point.
Second, there is this inflammatory headline to an experiment whose value eludes me (here). It's yet another demonstration that syntax is not unique to humans. Apparently Japanese great tits can combine "words to generate novel meanings." Thus, they too have syntax. Of course, as you read the article you will see that their syntax is really nothing like ours, that their words are nothing like ours, and that what they do is the palest possible shadow of what we do. Nonetheless, the conclusion is that what we do and what the birds do is effectively the same thing, so we can come to understand human linguistic capacity using the bird models. Or as one of the co-authors puts it:
'This study demonstrates that syntax is not unique to human language, but also evolved independently in birds. Understanding why syntax has evolved in tits can give insights into its evolution in humans', says David Wheatcroft, post doc at the Department of Ecology and Genetics at Uppsala University and co-author of the study.
Really? The bird syntax described is embarrassingly primitive. There is no hierarchical structure to speak of. In fact, there are only examples of two-"word" "sentences" (yes, scare quotes). They are linearly restricted in that A-B works but B-A doesn't. From the examples, it seems that the order of the "words" reflects the order of the instruction. So, "scan-the-surroundings" + "approach" is ok while "approach" + "scan-the-surroundings" leaves the bird unmoved (actually, if this is really compositional it raises an interesting question: why? After all, there is nothing incoherent in the second pair of commands, yet the bird does nothing). At any rate, that's it!
This is really of barely any interest when it comes to what humans do. In fact, it is quite a bit less interesting than what dolphins (or porpoises, I can't recall which) were taught to do long ago (there's a very long paper in Cognition, from when it was still worth reading). I used to teach this in my language and mind course for undergrads to demonstrate what human syntax was like. This was the foil. Here the animals learned 4-5 "word" sequences that instructed them to do things in a certain order. They succeeded. They mostly broke down after 5. There was no hierarchy here either. And no obvious recursion. At any rate, it was more impressive than what was reported here, and equally irrelevant to the human language issue.
Let me back up a bit: it would be interesting to show that animals other than humans could combine signs in some way so as to relate an articulation with an interpretation in a complex way. Were birds able to do this, that would indicate that bridging the articulation-meaning gap was not restricted to humans. However, this would require being able to show compositionality. But as I noted, this is not clear from the examples provided (maybe the paper did a better job, and if so, and if someone reads it, feel free to report back). In fact, it is not clear that these combos mean anything at all. Here's what the article says:
This small bird species experiences a number of threats, and in response to predators, they give a variety of different calls. These calls can be used either alone or in combination with other calls. Using playback experiments, Dr. Suzuki and colleagues could demonstrate that ABC calls signifies "scan for danger", for example when encountering a perched predator, whereas D calls signify "come here", for example when discovering a new food source, or to recruit the partner to their nest box. Tits often combine these two calls into ABC-D calls such as when approaching and deterring predators. When these two calls are played together in the naturally occurring order (ABC-D), then birds both approach and scan for danger. However, when the call ordering is artificially reversed (D-ABC), birds do not respond.
Note the combo is used in both approaching and deterring predators. How does that make sense if the meaning is as indicated? Also, how do we know that these are not two calls in sequence rather than a combo call? But these are small issues. The larger one, the one that the article makes hay of, is that this is like human syntax. Not on your life. It's embarrassing that a journal like Nature Communications (part of the Nature brand!) doesn't seem to know anything about human language, like, for example, that hierarchy is the main organizing principle (and not linear position). I have nothing against noting the amusing, cute features of birds (I love the Nature channel and am a card-carrying member of the Dr Doolittle society). But this is clearly not what the authors or the journal or Phys.org finds interesting. It's the light this purportedly sheds on human syntactic capacities that's the star of the show. And if this is so (and it is), I have bad news for these people: IT WILL TELL US NOTHING AT ALL ABOUT HUMAN SYNTAX. Listen: we need to start talking to these people. They really need us, not least to protect the world from pointless experiments that, I am sure, take up lots of time and talent.
Third, for those who missed this piece in the Onion, take a look here. I admit it: I would go to Vegas for this kind of entertainment.