Thursday, September 19, 2013

Faking it

Andrew Gelman, a statistician at Columbia whose opinions I generally respect (I read his blog regularly) and whose work, to the degree that I understand it, I really like, has a thing about Hauser (here).[1] What offends him is Hauser’s (alleged) data faking (yes, I use ‘alleged’ because I have personally not seen the evidence, only heard allegations, and given how easy these are to make, let's just try not to jump to comfortable conclusions). Here he explains why the “faking” is so bad; not rape, murder, or torture bad, but science-wise bad. Why? Because what fake data does is “waste people’s time (and some lab animals’ lives) and slow down the progress of science.” Color me skeptical.

Here’s what I mean: is this a generic claim or one specific to Hauser’s work?  If the latter, then I would like to see the evidence that his alleged improprieties had any such effect. Let me remind you again (see here and here) that the results of all of Hauser’s papers that were questioned have since been replicated. Thus, the conclusions of these papers stand. Anyone who relied on them to do their research did just fine.  Was there a huge amount of time and effort wasted? Did lab animals get used in vain? Maybe. What’s the evidence? And maybe not: the results all replicated. Moreover, if the measure of his crime is wasted time and effort, did Hauser’s papers really lead down more blind alleys and wild goose chases than your average unreplicable psych or neuro paper (here)?

As for the generic claim, I would like to see more evidence for this as well.  Among the “time wasters” out there, is faked data really the biggest problem, or even a very big one?  Or is this sort of like Republican "worries" about fake voters inundating the polls and voting for Democrats? My impression is that the misapplication of standard statistical techniques to get BS results that fail to replicate is far more problematic (see here and here). If this is so, then Gelman’s fake-data worries may, by misdirection, be leading us away from the real time sinks, viz. the production of non-replicable “results,” which, so far as I can tell, is closely tied to the use of BS statistical techniques to coax significance out of one in every twenty or so experiments. We should be so lucky that the main problem is fakery!
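The “one in every twenty” point is just the arithmetic of the conventional p < .05 threshold. A minimal simulation makes it concrete (this is my illustration, not anything from Gelman or Hauser; all names in it are mine, and it uses only the Python standard library): run many experiments where there is no real effect at all, and the significance test still “finds” something about 5% of the time.

```python
# Hypothetical simulation of the false-positive arithmetic behind p < .05:
# both "conditions" are drawn from the SAME distribution, so every
# significant result is spurious -- yet roughly 1 in 20 clears the bar.
import random
from statistics import mean

random.seed(42)  # make the run reproducible

def perm_pvalue(a, b, n_perm=200):
    """Two-sided permutation test: how often does shuffling the group
    labels produce a mean difference at least as large as observed?"""
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean(pooled[:len(a)]) - mean(pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_perm

n_experiments = 500
false_positives = 0
for _ in range(n_experiments):
    # no effect exists: both samples come from the same N(0, 1) population
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if perm_pvalue(a, b) < 0.05:
        false_positives += 1

print(f"false positive rate: {false_positives / n_experiments:.3f}")
# by construction this hovers around 0.05: one "finding" per ~20 null experiments
```

None of these spurious “results” involve fakery; they fall out of the threshold itself, which is the sense in which mindless significance-chasing can swamp deliberate fraud as a time sink.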

So that I am not misunderstood, let me add that nobody I know condones faking data. But this is not because it in some large measure retards the forward march of science (this claim may be true, but it is not a truism); it is because faking is quite generally bad. Period (again, not unlike voter fraud). It should not be practiced or condoned for the same reason that lying, bullying, and plagiarism should not be practiced or condoned: these are all lousy ways to behave.  That said, I have real doubts that fake data is the main problem holding back so many of the “sciences,” and claiming otherwise without evidence can misdirect attention from where it belongs. The main problem with many of the “sciences” is the absence of even a modicum of theory, i.e. a lack of insight into what’s going on, and all the data mining in the world cannot substitute for one or two really good ideas. The problem I have with Gelman’s obsession is that in the end it suggests a view of science that I find wrongheaded: that data mining is what science is all about. As the posts noted above indicate, I could not disagree more.

[1] This is just the latest of many posts on this topic. Gelman, for some reason I cannot fathom, also has a thing about Chomsky, as flipping through his blog will demonstrate (e.g. here, and here).


  1. Convergence: the comments on AG's blog beat you to this punch.

    (Summary: the quick reply was "aren't p-values just as bad?", to which the answer was "yes, though I suppose I'm actually considering more the intentions of the person doing it.")

    1. As I read them, they made exactly my point: they are not JUST as bad but a little worse CONSEQUENTIALLY. They are more ubiquitous and more insidious. Odd, it seems to me, that here AG goes all Kantian on us when his stock in trade is consequences.

  2. I think I agree with some of this, at least for the linguistics/psycholinguistics/cognitive science context.

    But I think it's bizarre to characterize any published data faking as a victimless crime just because it's not immediately obvious that it launched a raft of other erroneous work. I'm pretty sure a fair amount of money was supporting this research, and faking data is worse than flushing it down the drain. If there was legitimate data that did not support the investigated effects, it could at least be incorporated in a meta-analysis, which could be useful in evaluating the sum total of similar findings/experiments. If the data was not even collected properly enough to report at all, that's just shoddy work, and a waste of someone's money at best.

    I agree that "data mining" is not the end-all of science, and is perhaps barely science to begin with, but high-quality data is exceedingly precious, so adding data that you *know* is bad to the scientific record is pretty bad.

    I'm also not sure why you're so confident it didn't lead to any other wasted efforts. The file drawer problem is pretty common and insidiously hard to track. Not to mention the time, energy, and money spent in all the investigations, retractions, etc. As well as adding to the list of frauds in the press lately, which doesn't help with maintaining public faith in research.

    I just think it's hard to quantify what counts as "holding science back." Science is a very public endeavor, and depends on much more than just the competition of abstract ideas. So even if the ideas were somehow fundamentally correct, faking is damaging, and high-profile faking can result in a special kind of damage. I guess it just depends on exactly how you're quantifying wasted effort. I think you're probably right that well-meaning but erroneous/sloppy methods bring more false conclusions to light than people intentionally faking their data. Or at least I'd like to believe that. But wasting other scientists' time is not the only way to quantify impact on science, and I think scandals like Hauser's and others cast long shadows of a different kind.

    As far as the replications go, frankly, I'd be suspicious of those. If the effects were real and robust, why would Hauser have needed to fake his data? Maybe it was laziness or something, who knows. But I have to assume that journals would be interested in publishing replications of something that gained notoriety (think of the citations and impact factor!), so the run-of-the-mill problems of publication bias, etc., could easily be amplified in the wake of a Hauser- or Stapel-like scandal.

    Finally, I think as social/cognitive scientists, we end up getting a fairly rosy view of the actual ethics of researchers. I would expect that as funding dollars go up, the odds of data faking also go up, at least in fields where it might be possible to get away with it. So while linguists may typically have little incentive to intentionally fake data or cook the stats in a way they *know* is wrong, I think the incentives are pretty high in the medical sciences. In order to really support your claim that intentional dishonesty does not do as much damage as more innocent methodological sloppiness, you'd have to figure out how to weigh the damages to public perception (i.e., perception outside of the narrow field that would normally read the paper) against other factors, and you'd have to establish which fields you're considering. After hearing some of Ben Goldacre's talks (highly entertaining, btw), data faking may be a much, much bigger problem in fields with much higher stakes than linguistics.

    1. I don't believe that I said it was a "victimless" crime. I said that if the problem in faking data is wasted time and money then we need evidence showing that time and money were wasted. So far, I see little in the Hauser case. What I do see is that the results have stood, so far as we know, and if that is so the "waste" is hardly evident.

      I also said that, whether or not faking data wastes time and money, it's a crappy thing to do, and that nobody I know has or would defend the practice. Not because it's wasteful but because faking things is a lousy thing to do, as is plagiarism, self-promotion, and sloppiness.

      I then added that I suspect that fake data is not the biggest or most common impediment in science. In some sense, it is a very easy target; the biggest problems come from the misapplication of techniques in mindless ways. This appears to be what the meta-analyses show, and the replication rate in many areas of the sciences is very, very low. If the gold standard is replicable results, then whatever hinders these is what we should focus on, even if it doesn't feel nearly as good as pointing out the obvious. In fact, pointing to fakery suggests that this IS the problem, but if it's not, then this is a disservice, a big one from the looks of things (i.e. if replicability is the measure).

      Last, I noted that Hauser's stuff has been replicated and that, whatever the provenance of the results (even if he gazed into a crystal ball and intoned them), this is a big deal. I said before that I take replicability to be a prima facie reason for thinking that an experiment has been done right. As this is hardly trivial in the mental and brain sciences, we should be careful about accusing people of bad hygiene when their results consistently replicate, as Hauser's appear to have done.

      So, on consequentialist grounds: show me the money; document the waste. I agree that this could be true, but it could also be true that it is a minor blip, not unlike voter fraud, and is thus more a distraction than a problem. That said, we can all agree that we should not fake data or much of anything else. It's dishonest and to be shunned on that ground alone, EVEN IF IT HAS FEW BALEFUL CONSEQUENCES IN THE ACTUAL PRACTICE.

    2. Thanks for the recap :-)

      Sorry I wasn't more concise about where I was trying to add to your points:
      - I suspect many fields have much more endemic faking problems. See Ben Goldacre for evidence/discussion about how many studies supporting drugs involve knowingly biased selection of which results to report (this might as well be data faking). So while I think I agree with you regarding linguistics/cog sci, I'm not sure the same holds across "science" writ large.
      - Finding a "paper trail" of all wasted effort is simply not plausible. Requiring someone to "show you the money" before you believe the level of waste is a non-argument. I'm not saying we should go in infinite circles about where the burden of proof is; just pointing out it cuts both ways.
      - There are more dimensions of what counts as "damage to science" than what I think you have in mind. Even if Hauser's ideas are ultimately vindicated/replicated, the negative publicity isn't a non-issue.

    But I ABSOLUTELY agree with you about where we should be spending our energies, especially in our field(s). Spending time and energy to train people better, both in methods and in theory, is likely to be far more productive than spending energy hunting down the data-fakers. Personally, I am an optimist. I suspect that the pathologies that lead to knowing data fakery are relatively rare, AT LEAST IN LOW-STAKES FIELDS LIKE OURS (to hammer on my first point). And "changing" those people is probably not worth the effort. But there are tons of well-meaning researchers who could be unknowingly contributing to false results or theoretical dead-ends, and if these folks could get trained a little better, they would certainly try to do things better. So I really do agree with you in the conclusion, and I really did follow your post the first time. Just trying to point out a couple of other dimensions to the problem. Thanks for the thought-provoking post and discussion!

    3. I'll let you have the last word because I find I agree with most all of it. Thx to you.

  3. I fully agree with Scott's comment. I also think some [many] readers of this blog may find the following informative:

  4. I agree with CB that everyone should read this. Its content does more to discredit the intellectual insight and integrity of Pieter Seuren than anything I could possibly pen.

    1. small correction: I did not say everyone should read it, but merely suggested some/many people might find it informative [and informative does not imply I endorse the content]...

      But the comment made me curious: assuming for argument's sake that every single point made by Seuren is incorrect, is one blog post really enough to "discredit the intellectual insight" of a person? Of course I was not around at the time of the Generative Semantics disagreements [I don't like the term 'war' in this context], so maybe someone who was can elaborate?

    2. Perhaps its most unsatisfactory feature is the complete failure to discuss the rise of formal semantics (known at the time as Montague Grammar) as a factor in the fall of Generative Semantics: Montague made it abundantly clear to most people that you couldn't deduce syntactic form simply by noting facts about the meaning; some prominent Generative Semanticists became formal semanticists more or less immediately, and who knows how many people who might have become Generative Semanticists didn't for the same reason.

    3. Thanks Avery. I agree that this is an oversight to say the least. But it would seem odd, even for Norbert, to base his evaluation that the post's "content does more to discredit the intellectual insight and integrity of Pieter Seuren than anything I could possibly pen" on Seuren's failure to discuss Montague Grammar.

    4. I only said it was the worst. The second worst would be its failure to note that a substantial amount of work appeared, most notably Chomsky's own "Remarks on Nominalization" but also things by Anderson, Jackendoff, Schachter and others, showing that "lexicalist" structures easily managed a lot of things that were problematic under GS. So Montague shows that the GS assumptions about the relation between meaning and form are arbitrary, and assorted generativists show that they produce wrong results. That's quite sufficient motivation for GS to disappear, without any dubious activities on Chomsky's part.

    5. Thanks Avery, I understand that the worst does not mean the only one. But it would seem that if not even the worst failure justifies Norbert's evaluation, whatever is 'not worst' won't either.

      Now what you mention above are mostly disagreements with Pieter about whether GS was a superior view [at the time, not superior to whatever the 'interpretative' branch developed into later]. And your view seems somewhat at odds with the account Huck & Goldsmith [1995] give [those two are hardly Chomsky-haters].

      I also think one does not have to go all the way to picture Chomsky as 'the evil demon' who bears sole responsibility for all hostility of the time [I doubt even Pieter does that, I certainly don't!] to accept that Chomsky was pretty much pulling the strings and that some of his decisions seemed not entirely based on what looked like the most promising approach at the time. Jackendoff expresses surprise that in 1966 Chomsky's response to his paper on interpretive pronominalization "gave [him] licence to go ahead and develop his approach" [Jackendoff, 1995, 99]. Now that was long before any of the problems with GS you mention arose and it seems at least legitimate to wonder why Chomsky withdrew his support for GS at this early time.

    6. This comment has been removed by the author.

    7. H&G is not in my library, but Elan Dresher's review of it, before it goes comic, would strongly suggest that my viewpoint is sustainable.

    8. Interesting that you would suggest reading a review instead of the book, after we just discovered a short while ago that the two reviews of "Of Minds and Language" (here and here) are so different that readers wonder whether the same book was reviewed. So maybe it might be best to actually read H&G before making up your mind.

      BTW, I found the review by Dresher entertaining but fear he got a bit carried away describing his dreams [or nightmares?] instead of the book. I do not know Geoffrey Huck very well but have talked to John Goldsmith and learned that he thinks a great deal of Chomsky. So he hardly would have depicted him as anything resembling "an ancient photo of Chomsky, magnified to frightening dimensions. The word “Ideology” is stamped in red under each eye".

      May I also draw your attention to this nice contradiction: Dresher writes: [1] "Since there was [according to H&G] nothing wrong with Generative Semantics as a theory, and since Chomsky’s arguments against it were faulty..."

      But H&G never claimed there was nothing wrong with GS and Dresher himself cites POSTAL saying:

      [2] “The bad thing is not that Generative Semantics disappeared but that the other branch of transformational theory didn’t disappear.”

      So certainly the claim could not have been that NOTHING was wrong with GS. And if you somehow got the impression I believe nothing was wrong with GS, you are mistaken; I do not think that at all. I also did not challenge that its demise was likely caused by many factors. I had wondered with Jackendoff why Chomsky had in 1966 supported his proposal over what were at THAT TIME the more promising proposals of Ross, Lakoff, etc.

    9. The review was available on the web; the book wasn't. And Chomsky has quite a history of being supportive of grad students trying out different things, in spite of his take-no-prisoners approach to public debate with them once they have their PhDs.

      I'd add that nobody has any obligation to completely establish that some approach is a total dead end before trying something different. Fritz Newmeyer tells us that the ms of "Remarks on Nominalization" appeared in 1967 (that was also my recollection), so C might well have been having second thoughts about GS ideas by 1966.

  5. And more, my purpose here was not to start yet another inquest on the demise of generative semantics, but to point out that a rather obvious reason for dismissing Pieter's blog entry #2 as of rather low quality is its failure to mention what is arguably the most important factor in the ultimate disappearance of generative semantics (especially from its strongholds in the midwest, out of range of Chomsky's fleet of black helicopters, even if he had ever had such a thing).

    1. Again, we do not disagree, and you continue missing the only point I expressed disagreement about [with Norbert, not with you]. I never claimed Pieter's blog entry was of high quality. In fact I said: "Assuming for argument's sake that every single point made by Seuren is incorrect; is one blog post really enough to "discredit the intellectual insight" of a person?" So I implicitly granted that everything Pieter had said was wrong [I do not think that's the case, but let's not argue about that]. I took issue with the sweeping claim Norbert made about Pieter's intellectual insight in general based on ONE blog post.

      So my question "Of course I was not around at the time of the Generative Semantics disagreements [I don't like the term 'war' in this context] - so maybe someone who was can elaborate?" was not about whether or not Pieter's blog was good, but whether it was so bad that, based on it alone, one can 'discredit his intellectual insight'.

      You replied that the worst part was the failure to mention Montague's impact. I granted you that was a failure but said that it hardly justifies 'discrediting Pieter's intellectual insight' in the general terms Norbert suggested. There is really no need for you and me to quibble about who was guilty of what during the GS disputes, because that was never at issue [at least not for me, and I even posted something on Pieter's blog suggesting that his perspective might be skewed].