Let me add a word or two.
First, one very significant difference (for me) between the Hauser case and the other two that Bhattacharjee (the author) mentions is that all of Hauser's disputed work was REPLICATED. This is a very big deal. It means that whatever shortcuts there may have been, they were inconsequential given that the results stand. In fact, I would go further: if the stuff replicates, then this constitutes prima facie evidence that the investigation was done right to begin with. Replicability is the gold standard. The problem with Suk's results and Stapel's is that the work was not only dishonest but totally unstable, or so I gather from the article. Only they could get their results. In this regard Hauser's results are very different.
Second, as Marc points out, it looks like Stapel's work really didn't matter. There was nothing deep there. Nor, a priori, could you expect there to be. His experiments were unlikely to touch the underlying psychological mechanisms of the behavior of interest because the kinds of behaviors Stapel is interested in are just too damn complicated. Great experiments isolate single causal factors. The kinds of causal powers implicated in these experiments are many and, no doubt, interact in many complex ways. Thus, it is no surprise that the effect sizes were expected to be small; indeed, Stapel had to cook the numbers so that the effect sizes did not appear large ("He knew that the effect he was looking for had to be small in order to be believable..."). He was trolling for statistical, not scientific, significance. His ambitions were political rather than scientific. At any rate, he believed that the appearance of credibility was tied to having small but significant effect sizes. Maybe the right question is why anyone should care about small effects in this sort of domain to begin with.
Third, Bhattacharjee ends by noting that fraud is not likely to be the most serious polluter of the data stream. Let me quote:
"Fraud like Satapel's -brazen and careless in hindsight- might represent the lesser threat to the integrity of science than the massaging of data and selective reporting of experiments...tweaking results [NH]- like stopping data collection once the results confirm the hypothesis - is a common practice. "I would certainly see that if you do it in more subtle ways, it's more difficult to detect," Ap Dijksterhuis, one of the Netherlands best known psychologists, told me...
So, is fraud bad? Sure. Nobody endorses it in science any more than anywhere else. But this article, as Marc so eloquently notes, shows that fraud is easier to commit where one is trolling for correlations rather than exploring underlying causal powers (i.e., in the absence of real theory) and where effect sizes are likely to be small because of the complex interaction of multiple causal factors. Last, let's never forget about replicability. It matters, and where it exists, fraud may not be easily distinguishable from correct design.