Friday, June 8, 2018

Science without theory

Sometimes we just don't know much and this puts us in an odd epistemic position. Not knowing much comes with an imperative of intellectual modesty: one should have relatively little confidence in one's descriptions of what is going on and even less in projections of what might happen counterfactually. Being ignorant is a real bummer, especially scientifically.

Now, all of this should be obvious. And to many it is. But it is also something that working scientists have a professional interest in "bracketing" (a term that roughly means "setting aside," which I learned as a philo grad student and which has come in very handy over the years, as it sounds so much better than "ignore" (something an honest intellectual purportedly should not do), which is more or less what it amounts to), and so, not surprisingly, they largely do. Moreover, as nobody gets kudos for advertising their ignorance (well, there was Socrates, I guess, and he got little kudos and occasionally a hemlock milkshake), scientists, especially given the current incentive system, are less forthcoming about the flimsiness of their conclusions than perhaps they should be. And this is a problem, for it is really hard to say anything useful when you have no idea what the hell is going on.

Why do I mention this? Rob Chametzky sent me a recent paper on this topic (here). The post (the author is Denny Borsboom (here), so I will dub the post DB) makes the reasonable point (reasonable to me, as I have been making it as well for a while now) that the absence of "unambiguously formalized theory in psychology" lies behind much of the "replication" crisis in psych (p. 1).[1] It, moreover, suggests that this is more or less endemic to the discipline because "psychology has the hardest subject matter ever studied" (p. 3). I do not know whether this last point is correct, but it is certainly true that much of what psychologists insist on studying is almost surely the product of many, many interacting systems (e.g. almost any social psych topic!), and it is well known that interaction effects are very hard to disentangle, very very hard (a toy simulation after the list below illustrates why). So it is not surprising that if these topics are the sorts of things that psychologists study, then the level of non-trivial theory that exists is close to zero. That is what one would expect (and what one finds). DB traces out some of the implications of this for the replication crisis. Here are some consequences:

·      The field is heavily stats dependent, as stats methods substitute for theoretical infrastructure.
·      The role of stats can grow so great as to induce theoretical amnesia in its practitioners, a mental state wherein those in the field "no longer know what a theory is" (p. 2).
·      Progress in atheoretical psych is necessarily very slow, given that experiments are always trying to factor out poorly understood context-dependent variables.
·      The discipline is susceptible to fads based on "poorly tested generalizations" that serve to make research manageable (at best) and allow for a kind of predatory free-riding (at worst).
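
To make the interaction point concrete, here is the toy simulation promised above (mine, not DB's; the Python code, the 2x2 design, and all the numbers are invented for illustration). In a balanced two-factor experiment, a difference-of-differences (interaction) estimate has twice the standard error of a main-effect estimate built from the same observations, so for the same raw effect size its power is far lower:

    import numpy as np

    rng = np.random.default_rng(1)
    n, effect, sigma, sims = 50, 0.4, 1.0, 5000   # invented numbers

    def power(kind):
        hits = 0
        for _ in range(sims):
            # four cells of a balanced 2x2 design: (A=0,B=0), (0,1), (1,0), (1,1)
            cells = rng.normal(0.0, sigma, size=(4, n))
            if kind == "main":
                cells[2] += effect
                cells[3] += effect   # main effect of A = `effect`, no interaction
                est = (cells[2].mean() + cells[3].mean()
                       - cells[0].mean() - cells[1].mean()) / 2
                se = sigma / np.sqrt(n)       # known-sigma z test, for simplicity
            else:
                cells[3] += effect           # effect only in the A=1,B=1 cell,
                                             # so the interaction contrast = `effect`
                est = ((cells[3].mean() - cells[2].mean())
                       - (cells[1].mean() - cells[0].mean()))
                se = 2 * sigma / np.sqrt(n)   # double the main-effect SE
            hits += abs(est / se) > 1.96
        return hits / sims

    print("power for the main effect: ", power("main"))          # roughly 0.8 here
    print("power for the interaction:", power("interaction"))    # roughly 0.3 here

Same data-collection effort, same raw effect size, and the interaction is missed most of the time. And real social psych effects sit under many more than two interacting factors.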

Needless to say, this is not a pretty state of affairs. The solution for this? Well, of course, more care with the stats and a kind of re-education system for psychologists: "It would be extremely healthy if psychologists received more education in fields which do have some theories, even if they are empirically shaky ones, so that the discipline can try to remember what a theory is and what it is good for, so that we don't fall into theoretical amnesia" (p. 3).

I cannot say that I find DB's description all that far off base. However, I think that a few caveats are in order. Here are some.

First, why is theoretical amnesia a bad thing if the field is doomed to be forever theoryless given its endemic difficulty? Understanding how theory functions in a field where it functions usefully is valuable if that utility can be imported into one's own field; in that case, being able to recognize and value theory is important. But if this is impossible, then why bother?

I suspect that DB's real gripe is that there is theory to be had (maybe by reshaping the topics studied) but that psychologists have been trained to ignore it and to substitute stats methods for theoretical insight. If this is DB's point, then it is an important one, and it applies to many domains where empirical methods often overrun their useful boundaries. And there is a better way to put it: stats, no matter how technically fancy, cannot substitute for theory. Or, to put this in lay terms: lots of data carefully organized is not a theory, and thinking that it is is just a confusion.
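A toy sketch of the lay version (my own gloss, nothing in DB): fit a flexible statistical model to data generated by a simple law (here y = 2x, with invented noise) and it will organize the sample beautifully while saying nothing about what happens outside it:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = 2 * x + rng.normal(scale=0.05, size=x.size)    # the "theory" is y = 2x

    curve = np.polynomial.Polynomial.fit(x, y, deg=9)  # pure data organization
    law = np.polynomial.Polynomial.fit(x, y, deg=1)    # a theory-shaped model

    for name, model in [("9th-degree fit", curve), ("linear law", law)]:
        in_sample = np.abs(model(x) - y).max()   # fit to the observed data
        at_x2 = abs(model(2.0) - 4.0)            # extrapolation to x = 2
        print(f"{name}: in-sample error {in_sample:.3f}, error at x = 2: {at_x2:.1f}")

The 9th-degree polynomial matches the ten observations essentially perfectly and is badly wrong at x = 2 (how badly depends on the noise draw); the model shaped like the underlying law extrapolates fine. Careful organization of the data you have is not a theory of the process that generated it.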

Second, the general point that DB makes (and that I agree with) is not at all idiosyncratic. Gelman has made the point here (again) recently. The last paragraph is a good summation of DB's basic point:

…hypotheses in psychology, especially social psychology, are often vague, and data are noisy. Indeed, there often seems to be a tradition of casual measurement, the idea perhaps being that it doesn’t matter exactly what you measure because if you get statistical significance, you’ve discovered something. This is different from econ where there seems there’s more of a tradition of large datasets, careful measurements, and theory-based hypotheses. Anyway, psychology studies often (not always, but often) feature weak theory + weak measurement, which is a recipe for unreplicable findings.

Having little theory is a real problem even if one's aim is just to get a decent stats description of the lay of the land. The problems one finds in theoryless fields are what one should expect; the methodological sloppiness comes with the territory. As Gelman puts it:

p-hacking is not the cause of the problem; p-hacking is a symptom. Researchers don’t want to p-hack; they’d prefer to confirm their original hypotheses. They p-hack only because they have to.
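
The forcing is easy to demonstrate with a minimal sketch (mine, not Gelman's; all numbers invented). Suppose the theory is too vague to single out one measurement, so the experiment records 20 candidate outcomes, and suppose there is no real effect at all. The chance that something comes out "significant" is about 1 - 0.95^20, roughly 0.64:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sims, outcomes, subjects = 5000, 20, 30   # invented numbers

    lucky = 0
    for _ in range(sims):
        # two groups with NO true difference on any of the 20 outcome measures
        g1 = rng.normal(size=(outcomes, subjects))
        g2 = rng.normal(size=(outcomes, subjects))
        pvals = stats.ttest_ind(g1, g2, axis=1).pvalue
        lucky += (pvals < 0.05).any()         # report whichever measure "worked"

    print(f"experiments with at least one p < .05: {lucky / sims:.2f}")   # ~0.64

A theory strong enough to dictate in advance the one measurement that matters makes the forking paths disappear; without it, the "significant" result is just the luck of the draw.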

If this is right, however, I think that both Gelman's and DB's optimism that this can be solved by better methodological hygiene is probably unfounded. The problem is that there is a real cost to not knowing what the hell is going on, and that cost is not merely theoretical but observational as well. Why?

Here is a good place to play the Einstein card (roll the drums): theory is implicated in determining what counts as observational! Here's the quote: "It is the theory which decides what we can observe."[2] To know what to count and how to count it (that's what stats does), you need a way of determining what should be counted and how (that's what theory does). So, no theory, no observations in the relevant scientific sense either. And if this is so, then when you really have no idea what the hell is going on, you are in deep doo-doo whether you know it or not. Can stats help? I doubt it. Being very careful and very cautious might help. But really the only thing to do in these cases is pray to the God of scientific traction for a bit of luck in getting you started. There is a reason why researchers who figure out anything are heroes. They are the people whose ideas allow us to get the ball rolling. Once it's rolling, it's a whole new game. Until then, nada! So, I doubt that the techno-optimism that Gelman and DB point to, the idea that sans theory stats can step in and allow us to do another kind of useful science, will really fly. But then, I am a pessimist in general.

Third, I don't think that what holds for social psych is characteristic of the whole endeavor. There are large parts of psych broadly understood (e.g. large parts of learning theory (see Gallistel's work on this), development (see Carey, Spelke, Baillargeon, and R. Gelman a.o. for example), perception, math capacities, language) where there is quite a bit of decent non-trivial theory that usefully guides inquiry. The problem is that social psych is where the fame and money are. You get on NPR for work on power poses, but not on Weber's law.

Fourth, this is really not the state of play in most of linguistics. We really do have some decent theory to fall back on in many parts of the core discipline (syntax, phonology, parts of semantics), and that is why we have been able to make non-trivial progress. The funny thing is that if DB and Gelman are correct, the brouhaha over linguistic data foisted upon the field by the stats inclined has things exactly backwards, for if they are right, the methods adopted by fields without any theoretical ideas are bad models for those that have some theoretical sub-structure.

Fifth, what DB and Gelman describe is really what we should expect. There is a long-standing hope that there exists a mechanical way of doing science (i.e. of gaining insight): if we were just careful enough in how we gathered data, if we only got rid of our preconceptions, if only our morals were higher, we could just look and see the truth. On this view, when the simple method fails, it is only because we didn't do it right. This is, of course, a reflection of the old Empiricist dream (see the previous post for the logical Positivist version of this). It repeatedly fails. It will always fail.

That’s it. Thx to Rob for sending me the DB piece.


[1] The adjective "formalized" actually understates the problem. There is precious little non-trivial theory in large parts of psych (especially social psych, the epicenter of the crisis), formalized or not. The problem is not "formalization" per se.
[2] Quoted in What Is Real? by Adam Becker, p. 29. The philosopher Grover Maxwell made a similar point oh so many years ago: "It is theory…which tells us what is or is not…observable" (Becker 184). If this is right, then the idea of grounding science on some a priori conception of the observable, independent of any theoretical assumptions, is pretty much a non-starter. We indulge in "theory" either explicitly or tacitly. The optimism arises, I believe, from ignoring this or from hoping that for much of what we look at the tacit theory is pretty solid. Given that such theory will, when inexplicit, revert to "common sense," this hope strikes me as idle, given that common sense is precisely what scientific insight almost always overturns.
