I would say sorry for the long new-post hiatus, but I'm not, really. I was traveling in Europe for no better reason than enjoyment and simply did not have the time to fit in anything intelligent (I know this leaves me wide open; C, take your best shot) between the cultural gawking, overeating, and excellent wine consumption. My sybaritic life, however, did leave time for reading, and here are two tidbits I found interesting:
1. On the physics envy front: It’s interesting to see physicists debating the role of “naturalness” in physical explanations (here and here; thanks to Ewan for the first link). Naturalness is a predicate of theories; those that have it gain in comprehensibility and predictability. So, if one’s desire is for theories that not only explain what happens but also why things are the way they are, then natural theories are for you.
Within linguistics, minimalists also pine for natural theories. I believe that we have some hints of what natural formal universals might look like, though our stories are far from complete. There are also some serious challenges, the biggest being the status of substantive universals within FL/UG (see some discussion here). Put crudely, from where I sit right now, the kinds of universals that Cartographers have isolated (and presented rather good evidence for) seem unlikely to have deeper conceptual anchors. In this they will (faintly) resemble the constants discussed in the two linked pieces above. The issue is the degree to which the constants we need to make our stories go can be deduced from the principles we employ. In physics, the question revolves around deducing the mass of the Higgs Boson. Here’s a flavor of the problem in physics:
“Naturalness has a track record,” Arkani-Hamed said in an interview. In practice, it is the requirement that the physical constants (particle masses and other fixed properties of the universe) emerge directly from the laws of physics, rather than resulting from improbable cancellations. Time and again, whenever a constant appeared fine-tuned, as if its initial value had been magically dialed to offset other effects, physicists suspected they were missing something. They would seek and inevitably find some particle or feature that materially dialed the constant, obviating a fine-tuned cancellation.
This time, the self-healing powers of the universe seem to be failing. The Higgs boson has a mass of 126 giga-electron-volts, but interactions with the other known particles should add about 10,000,000,000,000,000,000 giga-electron-volts to its mass. This implies that the Higgs’ “bare mass,” or starting value before other particles affect it, just so happens to be the negative of that astronomical number, resulting in a near-perfect cancellation that leaves just a hint of Higgs behind: 126 giga-electron-volts.”
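To get a feel for how extreme that cancellation would be, here is a toy calculation using only the two figures quoted above (this is arithmetic, not real particle physics; the variable names are just stand-ins):

```python
higgs_mass = 126.0   # GeV, the observed mass quoted above
corrections = 1e19   # GeV, the rough scale of the quoted quantum contributions

# For 126 GeV to survive, the bare mass must offset the corrections
# almost exactly -- a cancellation accurate to roughly:
print(f"one part in {corrections / higgs_mass:.1e}")  # prints "one part in 7.9e+16"
```

That is the sense in which the number looks “magically dialed”: nothing in the theory explains why the cancellation should be precise to sixteen-odd decimal places.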
That’s not a near miss! In current minimalist syntax, the challenge to naturalness arises when considering the status of, for example, the Universal Base Hypothesis, especially the gloriously elaborate one advanced by Cinque. If it is part of FL/UG, as he argues in a forthcoming Lingua piece, then it presents a serious puzzle for those inclined towards “natural” minimalist explanations of syntactic structure.
2. I also ran across an interesting paper on the peer review process (here) that bears on some earlier posts of mine. It seems that some enterprising scholars resubmitted slightly revised versions of papers that had already been published and got diametrically opposed evaluations. In one sense, this is not that surprising: it is not my experience that reviewer comments are known for their consistency. However, it appears that this paper, among others, is beginning to paint a rather dismal picture of the peer review process. The relevant quote (note the “good news”):
“If I did the probability theory right, the rejection of previously accepted papers is indistinguishable from the editors deciding to randomly accept papers with a twenty percent acceptance rate (with an acceptance rate of 20%, the probability of rejecting 9/9 papers is 13%, and the probability of rejecting 8/9 is 30%). I suppose the good news is that the study is too underpowered to detect a rejection rate definitively greater than would be expected randomly.
At this point, the only purpose of peer review that I can see is weeding out much of the utter bullshit (though even that fails occasionally).”
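The arithmetic in the quote is easy to check. A minimal sketch, assuming each paper is accepted independently with probability 0.2 (the same assumption the quoted author makes):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k acceptances among n papers,
    each accepted independently with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_accept = 0.20
n = 9

print(f"P(reject 9/9) = {binom_pmf(0, n, p_accept):.2f}")  # 0.13, the quoted 13%
print(f"P(reject 8/9) = {binom_pmf(1, n, p_accept):.2f}")  # 0.30, the quoted 30%
```

So the quoted figures hold up: rejecting all nine previously published papers is exactly what random acceptance at a 20% rate would produce about one time in eight.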
I would love to think that the review process in linguistics is superior, but I doubt it. One saving grace is that the field is much smaller, and so pulling the trick of resubmitting the same paper would likely not go unnoticed. However, I doubt we do much better on the problem noted in the paper’s last paragraph, which points to a related article:
“… which describes how, at multiple journals, one reviewer claimed the results were impossible while the other claimed that ‘we already knew this.’”
I have recently read a similar pair of reviews of a colleague’s work at a very major journal. So, though I would love to be smug while dissing the psychologists, this time it may not be wise to do so.