Friday, June 30, 2017

Statistical obscurantism; math destruction take 2

I've mentioned before that statistical knowledge can be a dangerous thing (see here). It's a little like Kabbala: dangerous in the hands of the inexperienced, the ambitious, and the lazy. This does not mean that, in their place, stats are not valuable tools. Of course they are. But there is a reason for the slogan "lies, damned lies, and statistics." A few numbers can cover up the most awful thinking, much as pretty pix of brains in the NYT can sell almost any cockamamie new idea in cog-neuro. So, in my view, stats is a little like nitroglycerine: useful, but dangerous on unsteady ground.

Now, even I don't really respect my views on these matters. What the hell do I know, really? Well, very little. So I will back this view up by pointing you to an acknowledged expert on the subject who has come to a very similar conclusion. Here is Andrew Gelman despairing of the view that, done right, stats is the magic empirical elixir, able to get something out of any data set, able to spin scientific gold from any experimental foray:

In some sense, the biggest problem with statistics in science is not that scientists don’t know statistics, but that they’re relying on statistics in the first place.
How is stats the problem? Because it covers up dreadful thinking:
Just imagine if papers such as himmicanes, air rage, ages-ending-in-9, and other clickbait cargo-cult science had to stand on their own two feet, without relying on p-values—that is, statistics—to back up their claims. Then we wouldn’t be in this mess in the first place.
So, one problem with stats is that they can make drek look serious. Is this a problem with the good use of stats? No, but given the current culture, it is a problem. And as this pair of quotes suggests, if something sounds dumb absent the stats, then one should be very, very, very wary of the stats. In fact, one might go further: if the idea sans stats looks dumb, then the best reaction on hearing that idea with stats is to reach for your wallet (or your credulity).
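To see why a lone p-value can lend unearned seriousness, here is a small simulation of my own (not from Gelman or the post he is responding to): manufacture pure noise, run a separate t-test on each of many outcomes, and more often than not at least one comes out "significant" at the usual .05 threshold.

```python
# A toy illustration (my example, not Gelman's): with pure noise and enough
# outcomes to test, some p-value will often fall below .05, letting a
# "finding" with nothing behind it wear the costume of statistical rigor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_outcomes = 50, 20

group = rng.integers(0, 2, size=n_subjects)            # arbitrary two-group split
outcomes = rng.normal(size=(n_subjects, n_outcomes))   # pure noise, no real effects

p_values = [
    stats.ttest_ind(outcomes[group == 0, j], outcomes[group == 1, j]).pvalue
    for j in range(n_outcomes)
]
print(f"smallest of {n_outcomes} p-values: {min(p_values):.3f}")
```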

So what does Gelman suggest we do? Well, he is a reasonable man so he says reasonable things:
I’m not saying statistics are a bad idea. I do applied statistics for a living. But I think that if researchers want to solve the reproducibility crisis, they should be doing experiments that can successfully be reproduced—and that involves getting better measurements and better theories, not rearranging the data on the deck of the Titanic.
Yup, it looks like he is recommending thinking. Not a bad idea. The problem is that stats has the unfortunate tendency to replace thought. It gives the illusion of being able to substitute technique for insight. Stats are often treated as the Empiricist's perfect tool: it is the method that allows the data to speak for itself. And this is the illusion that Gelman is trying to puncture.

Gelman has (given his posts of late) come to believe that this illusion is deeply desired. Here he is again, replying to the suggestion that the misuse of stats is largely an educational problem:
Not understanding statistics is part of it, but another part is that people—applied researchers and also many professional statisticians—want statistics to do things it just can’t do. “Statistical significance” satisfies a real demand for certainty in the face of noise. It’s hard to teach people to accept uncertainty. I agree that we should try, but it’s tough, as so many of the incentives of publication and publicity go in the other direction.
I would add, you will not be surprised to hear, that there is also the Eish (Empiricist-ish) dream I mentioned above, wherein the aim is to minimize the human factor mediating between data and theory. Rationalists believe that the world must be vigorously interrogated (sometimes put under extreme duress) to reveal its deep secrets. Es don't think that it has deep secrets, as they don't really believe that the world has that much hidden structure. Rather, the problem is with us: we fail to see what is before our eyes. If we gather the data carefully and inspect it with an open heart, the data will speak for itself (which is what stats, correctly applied, will allow it to do). This Eish vision has its charms. I never underestimate it. I think that it partially lies behind the failure to appreciate Gelman's points.

3 comments:

  1. Of course, I couldn't agree more with this, and am loving how elegantly Gelman puts some of these points (it is also much appreciated that he has the street cred that he can say this without people assuming that he's just saying this because he's too dumb to do 'the right stats' or because he has some shitty data he wants to get a pass on). I loved his last point about people believing that stats give them certainty. I'm reminded of one recent discussion where an expert was quoted as saying that of course we don't need stats for the clear cases; we need them for the muddy cases where you would otherwise be uncertain about what happened. I've similarly had students tell me that the reason science needs stats is so that we can 'know for sure' whether some effect is 'real' or not. As if the stats were going to *remove* the uncertainty rather than just attempt to precisely quantify it! If fancy stats were aimed at removing uncertainty about the world, sign me up. But why the heck do I want to spend a lot of effort going after the ideal method for removing uncertainty about the exact extent of *my uncertainty*??

  2. "How is stats the problem? Because it covers up dreadful thinking"

    Sometimes stats are just part of the dreadful thinking. For example, Gigerenzer et al. (pdf) have a nice chapter on the mindless, ritualistic use of statistical tests.

  3. Gelman makes some great points. I'd only add that this: "Stats are often treated as the Empiricist's perfect tool: it is the method that allows the data to speak for itself"

    is true, but perhaps incomplete. We all learn in stats class that statistics *don't* do that and can't do that, and sensible psychologists have been explaining this for the better part of a century. Someone--Jacob Cohen?--once said that stats classes largely served to identify the underlying critical assumptions and necessary conditions that students would go on to ignore when actually conducting analyses.

    I am fortunate to have worked in a field where stats are often unnecessary. Suppose I have two reasonably-sized groups of subjects--a control group and a lesion group. The control group does very well on Test X, whereas the lesion group entirely fails. I don't need stats to interpret those data, though reviewers have been known to ask.
