1. Alex Drummond sent me this link to a nice little paper on what appears to be an old topic that still stumps physicists. The chestnut is the question of whether hot water freezes more quickly than cold. The standard answer is "you gotta be kidding," and then lots of aspersions are cast on those who think that they have proven the contrary empirically. Read this; what's interesting is that nobody ever thought that the right answer was anything but the obvious one. However, experiments convinced many over the centuries that the unintuitive view (i.e. that hot water does freeze faster) was correct. The paper reviews the history of what is now called the "Mpemba Effect," named after a high school student who had the courage of his experiments and was ridiculed for this by teachers and fellow students until bigger shots concluded that his report was not nuts. Not that it was correct, however. It turns out that the question is very complex, takes a lot of careful reasoning to make clear, and is incredibly hard to test. It's worthwhile reading for linguists, for it gives a good taste of how complex interaction effects stymie even advanced sciences. So, following the adage that if it's tough for physics don't be surprised if it's tough for linguistics, it's good to wallow in the hardships and subtleties of a millennia-old problem.
2. Here's a recent piece on how hard it is to think cleanly in the sciences. None of it is surprising. The bottom line is that there is lots of wiggle room even in the best sciences for developing theories that would enhance one were they true. So, there is a strong temptation to find them true and there are lots of ways of fudging the process so that what we would like to be the case has evidence in its favor. I personally find none of this surprising or disheartening.
Two points did strike me as curious.
First, there is the suggestion that a success rate of 15% is something to worry about. Maybe it is, but what should we a priori believe the success rate should be? Maybe 15% is great, for all we know. There is this presupposition that the scientific method (such as it is) should insulate us from publishing bad papers. But why think this? IMO, the real issue is not how many bad papers get out there but how many good ones. Maybe an 85% miss rate is required to generate the small number of good papers that drive a field forward.
Second, there is the suggestion that this is in part due to the exigencies of getting ahead in the academic game. The idea is that pressures today are such that there is lots to gain in painting rosy research pictures of ever expanding revolutionary insight. Maybe. But do we really know if things were better in more relaxed times, when these sorts of pressures were less common? I don't know. It would be nice to have a diachronic investigation to see whether things have gotten worse. Personal anecdote: I once read through the proceedings of the Royal Society from the 17th and 18th centuries. It was a riot. Lots of the stuff was terrible. Of course, what survives to the present day is the gold, not the dross. So, how do we know that things have gotten worse and that the reasons for this are contemporary pressures?
That's it. Science is hard. Gaining traction is difficult. Lots of useless work gets done and gets published. Contrary to scientific propaganda, there is no "method" for preventing this. Of course, we might be able to do better and we should if we can. But I for one am getting a little tired of this sky-is-falling stuff. The idea seems to be that if only we were more careful all problems could be solved. Why would anyone believe this? As the first paper outlines, even apparently simple problems are remarkably difficult, and this in areas we know a lot about.