
Sunday, June 9, 2013

I'm Baaaaacccckkkk!


I would say sorry for the long new-post hiatus, but I'm not really. I was traveling in Europe for no better reason than enjoyment and simply did not have the time to fit in anything intelligent (I know this leaves me wide open; C, take your best shot) between the cultural gawking, overeating and excellent wine consumption. My sybaritic life, however, did leave time for reading, and here are two tidbits I found interesting:

1. On the physics envy front: It’s interesting to see physicists debating the role of “naturalness” in physical explanations (here and here; thanks to Ewan for the first link). Naturalness is a predicate of theories; those that have it gain in comprehensibility and predictability. So, if one’s desire is for theories that not only explain what happens but also why things are the way they are, then natural theories are for you.

Within linguistics, minimalists also pine for natural theories. I believe that we have some hints of what natural formal universals might look like, though our stories are far from complete. There are also some serious challenges, the biggest being the status of substantive universals within FL/UG (see some discussion here). Put crudely, from where I sit right now, the kinds of universals that Cartographers have isolated (and presented rather good evidence for) seem unlikely to have deeper conceptual anchors. In this they (faintly) resemble the constants discussed in the two linked pieces above. The issue is the degree to which the constants we need to make our stories go can be deduced from the principles we employ. In physics, the question revolves around deducing the mass of the Higgs boson. Here’s a flavor of the problem in physics:

“Naturalness has a track record,” Arkani-Hamed said in an interview. In practice, it is the requirement that the physical constants (particle masses and other fixed properties of the universe) emerge directly from the laws of physics, rather than resulting from improbable cancellations. Time and again, whenever a constant appeared fine-tuned, as if its initial value had been magically dialed to offset other effects, physicists suspected they were missing something. They would seek and inevitably find some particle or feature that materially dialed the constant, obviating a fine-tuned cancellation.
This time, the self-healing powers of the universe seem to be failing. The Higgs boson has a mass of 126 giga-electron-volts, but interactions with the other known particles should add about 10,000,000,000,000,000,000 giga-electron-volts to its mass. This implies that the Higgs’ “bare mass,” or starting value before other particles affect it, just so happens to be the negative of that astronomical number, resulting in a near-perfect cancellation that leaves just a hint of Higgs behind: 126 giga-electron-volts.”
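
To get a feel for the scale of that cancellation, here is a rough back-of-the-envelope sketch; the ~10^19 GeV correction is just the order of magnitude quoted above, not an exact figure:

```python
# Order-of-magnitude illustration of the fine-tuning described in the quote above.
observed_mass_gev = 126                               # measured Higgs mass
corrections_gev = 10**19                              # rough size of contributions from known particles
bare_mass_gev = observed_mass_gev - corrections_gev   # what the "bare mass" must be to cancel them

print(f"bare mass ~ {bare_mass_gev:.3e} GeV")                            # ~ -1.000e+19
print(f"tuning ~ 1 part in {corrections_gev / observed_mass_gev:.0e}")   # ~ 1 part in 8e+16
```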

That’s not a near miss! In current minimalist syntax, the challenge to naturalness arises when considering the status of, for example, the Universal Base Hypothesis, especially the gloriously elaborate version advanced by Cinque. If hypotheses like these are part of FL/UG, as he argues in a forthcoming Lingua piece, then they present a serious puzzle for those inclined towards “natural” minimalist explanations of syntactic structure.

2. I also ran across an interesting paper on the peer review process (here) that bears on some earlier posts of mine. It seems that some enterprising scholars resubmitted slightly revised versions of papers that had already been published and got diametrically opposed evaluations. In one sense, this is not that surprising. It is not my experience that reviewer comments are known for their consistency. However, it appears that this paper, among others, is beginning to paint a rather dismal picture of the peer review process. The relevant quote (note the “good news”):

“If I did the probability theory right, the rejection of previously accepted papers is indistinguishable from the editors deciding to randomly accept papers with a twenty percent acceptance rate (with an acceptance rate of 20%, the probability of rejecting 9/9 papers is 13%, and the probability of rejecting 8/9 is 30%). I suppose the good news is that the study is too underpowered to detect a rejection rate definitively greater than would be expected randomly.
At this point, the only purpose of peer review that I can see is weeding out much of the utter bullshit (though even that fails occasionally).”
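
For what it's worth, the binomial arithmetic behind those figures checks out. Here is a minimal sketch, assuming a 20% acceptance rate (i.e., an 80% chance of rejection per paper) and independent decisions across the nine resubmissions:

```python
from math import comb

def prob_rejections(k: int, n: int = 9, p_reject: float = 0.8) -> float:
    """Binomial probability of exactly k rejections out of n independent submissions."""
    return comb(n, k) * p_reject**k * (1 - p_reject)**(n - k)

print(round(prob_rejections(9), 2))  # 0.13 -- "the probability of rejecting 9/9 papers is 13%"
print(round(prob_rejections(8), 2))  # 0.3  -- "the probability of rejecting 8/9 is 30%"
```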

I would love to think that the review process in linguistics is superior, but I doubt it. One saving grace is that the field is much smaller, so pulling the trick of resubmitting an already published paper would be unlikely to go unnoticed. However, I doubt we do much better on the problem noted in the last paragraph of the linked piece, which points to a related article:

“… which describes how, at multiple journals, one reviewer claimed the results were impossible while the other claimed that ‘we already knew this.’”

I have recently read a similar pair of reviews of a colleague’s work submitted to a very major journal. So, though I would love to be smug while dissing the psychologists, this time it may not be wise to do so.

4 comments:

  1. The original paper testing the review process (http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6577844) is apparently from 1982! I am not sure that there are any indications that things have improved since then. At the same time, it is not clear what the alternative is. Just posting everything, indeed? But then who is going to do the weeding for us, poor readers? Such a system might lead to people sticking even more to the work of people they happen to know.

  2. Yes, I agree that's a problem and I have no good ideas on how to solve it. One thing I do in other areas is find leading figures and read what they think is good. One can get insulated, but if you look for contrarians, it can help. What is less clear to me is whether the journals are any different. It's not like the journals express a wide range of different opinions, and they can serve to stop a lot of interesting work from getting out. My own hunch is that the journals' main function nowadays is to serve as vetting venues for tenure and promotion review, rather than the dissemination of new results. They are too slow for this latter task. They are "archival," and serve as hurdles to academic promotion. This is not a trivial function, but it is not how journals used to function.

  3. What features would you want to see in this kind of website? I think at the very least you'd want to be able to categorize papers, tag papers with semi-critique (like "no data provided" or "invalid reasoning"), as well as upload auxiliary material (data, code, etc.) without putting it in the body of the work. Probably also you'd want some kind of voting system so that users can rate the paper, tho I would suggest that this can only happen if the person voting also leaves a comment explaining the vote, otherwise you get bullshit votes based on title/summary/author/etc.
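
    Purely as a hypothetical sketch of how such a site might model papers, tags, and comment-backed votes (all the names below are made up for illustration, not taken from any existing system):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Vote:
        user: str
        score: int     # e.g. +1 or -1
        comment: str   # the explanation that should accompany every vote

    @dataclass
    class Paper:
        title: str
        categories: list[str] = field(default_factory=list)       # topic categorization
        critique_tags: list[str] = field(default_factory=list)    # e.g. "no data provided", "invalid reasoning"
        auxiliary_files: list[str] = field(default_factory=list)  # links to data, code, etc.
        votes: list[Vote] = field(default_factory=list)

        def add_vote(self, vote: Vote) -> None:
            # Enforce the "no vote without a comment" rule suggested above.
            if not vote.comment.strip():
                raise ValueError("a vote must include an explanatory comment")
            self.votes.append(vote)
    ```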

  4. A very new initiative is called the "Selected Papers Network";
    people have tried this before with what were called arXiv overlay journals, but for reasons I don't understand they never took off.
    Something like this is definitely about to happen, but exactly how, or in what form, is not yet clear.
