Methodological sadism (MS) is quite fashionable nowadays and nothing gets practitioners more excited than the possibility that someone somewhere is proposing something interesting (i.e. something that reaches beyond the sensory surface of things and that might possibly reveal some of the underlying mechanics of reality). You’ve all met people like this, and one of their distinctive character traits is a certain (smug?) assurance that when it comes to the philosophy of science, they are on the side of the angels. They love to methodologically demarcate the boundaries of legitimate inquiry so as to protect the weak-minded from fake science.
Of course, the standard demeanor of MSers is severe. Yes, they are tough. But standards must be maintained lest we slide joyfully to our scientific perdition. Like I said, you’ve all met MSers. Nowadays, at least in my little domain of inquiry, they are the media stars and have done a pretty good job convincing the outside world (and some on the inside) that GG is dead and that there is really nothing special about the cognitive powers required for language. I think that this is deeply wrong, and will write another brief arguing as much in the next post. But for now, I want to, once again, offer some prophylaxis against the most rabid form of MS: falsification. Here is a useful short antidote, a paper (which I will refer to as ‘AB’; Adam Becker is the author) that touches all the right themes. Its main claim is that trying to demarcate science from non-science is a mug’s game that relies on ignoring how real, successful domains of inquiry have grown.
So what are the main themes?
First, AB points out that MSers (my term, not AB’s) adhere to a basic erroneous principle: “that a new theory shouldn’t invoke the undetectable” (2). Why? Because this makes it “unfalsifiable.” So, observability underlies falsifiability and both are used by “self-appointed guardian[s], who relish dismissing some of the more fanciful notions in physics, cosmology and quantum mechanics [and linguistics! NH] as just so many castles in the sky” in order to protect science “from all manner of manifestly unscientific nonsense” (2).
There are ways of understanding falsifiability that seem unobjectionable, namely that theories that can never have observable consequences are thereby undesirable. Well, yeah. The problem is that this is a very low bar, and any stronger version that “turn[s] ingenuity into fact” must be “much more nuanced” (2). Why? Because falsifiability is hardly ever attainable and observability is undefinable. Let’s consider both of these points seriatim.
First, falsifiability. This is impossible for scientific theories for the simple reason that any falsification can be patched up with the right ad hoc statement, leaving the rest of the theory the same. As AB puts it (correctly) (2):
Falsifiability doesn’t work as a blanket restriction in science for the simple reason that there are no genuinely falsifiable scientific theories. I can come up with a theory that makes a prediction that looks falsifiable, but when the data tell me it’s wrong, I can conjure some fresh ideas to plug the hole and save the theory.
Any linguist knows how true this is. Moreover, the patching is easier the less brittle a theory is, and our current theories tend to be very labile. You can bend them in many directions without ever hearing a creak, let alone inducing a crack or a break. You lose nothing by adding a bespoke principle to explain recalcitrant data, because the only thing to lose in doing this is explanatory power, and there was not much of that to begin with in flexible theories. However, even with good theories that have some oomph, it is generally possible (I would say “always possible” but I am being mealy-mouthed here) to plug the hole and carry on. One of the virtues of AB is that it provides some nice historical examples of this happening. As AB notes, the history of science is full of them.
AB recounts the famous one where Uranus’ odd (apparently non-Newtonian) orbit begets Neptune, which in turn begets Vulcan to explain Mercury’s perihelion, which finally fails when General Relativity replaces Newton. AB notes that each historical move makes sense and that looking for Neptune (victory!) and looking for Vulcan (failure!) were both rational despite the different outcomes. Of course, with hindsight, Neptune is a bold prediction that strengthens the theory and Vulcan turns out to have been just an unfortunate wrong turn. But Vulcan did not lead to people jumping the Newtonian ship. Rather “astronomers of the time collectively shrugged and moved on” (4). And rightly so. Do you really want to give up Newton just because Vulcan was impossible to spot? What then do you do with all the other stuff it does explain?
Note that this means that there is an asymmetry between potentially falsifying experiments that succeed and those that don’t. The former are declared triumphs of the scientific will, while the latter are quietly shelved and de-emphasized (or, more accurately, added to the ledger of anomalies that a discipline collects as targets for superior explanations yet to come). No reason to show these off in public and take the shine off the powerful rational methods of scientific inquiry.
Of course, one might eventually hit the jackpot and find the anomalies resolved with the right new theory. Mercury was a feather in General Relativity’s cap. And well it should have been, for it allowed us to dump Vulcan and replace it with a story that allowed us to also keep all the good parts of Newton. So yes exceptions prove the rule in the sense that Newtonian exceptions prove (justify) Relativity’s rules.
AB provides other examples, one of the best being Pauli’s proposal to save the Law of the Conservation of Energy via the neutrino. It took over 25 years to prove him right (it’s good to have descendants like Fermi who get interested in your ideas). But what is interesting is not merely the long wait, but why it took so long: at the time Pauli proposed it, the neutrino had no properties that would have allowed it to be detected. So when Pauli proposed it, the theory was unfalsifiable because its basics were unobservable. But, as history shows, wait 25 years and who knows what unobservables might become detectable. As AB, again rightly, puts it (5):
It’s certainly true that observation plays a crucial role in science. But this doesn’t mean that scientific theories have to deal exclusively in observable things. For one, the line between the observable and unobservable is blurry – what was once ‘unobservable’ can become ‘observable’, as the neutrino shows. Sometimes, a theory that postulates the imperceptible has proven to be the right theory, and is accepted as correct long before anyone devises a way to see those things.
Not only is ‘observable’ irreparably vague, but MSers also often make a second, more extreme demand concerning its applicability. They require theory to postulate only constructs with directly observable magnitudes. Laws of nature then simply relate “directly observable quantities” and make no reference “to anything unobservable at all” (6). This was essentially Mach’s view and, in hindsight, it served to hamstring scientific insight. For example, as AB notes, Mach opposed atomic theory as unscientific. Why? Well, you cannot “see” atoms. Of course, as many pointed out, there is lots to be gained by postulating them (e.g. you can derive the principles of thermodynamics and explain Brownian motion). But this was not good enough for Mach, nor for current-day MSers. So how did Mach’s views hold up? Well, they apparently cost Walther Kaufmann a Nobel Prize and might have contributed to Boltzmann’s suicide, but aside from that they did not do any real damage, because the scientific community largely ignored Mach’s injunctions.
But isn’t postulating unseen elements unscientific? Well, no. Note: even if we all agree that a theory must have discernible (aka observable) consequences so that it can be tested/verified, this does not imply that every part of the theory must invoke elements whose properties are directly observable. Of course, it is nice if one can do this. It is always nice to be able to measure. But the idea that the only thing worth doing is relating (perhaps statistically) the magnitudes of observable quantities is something that would sink our best sciences if implemented. And this is a good reason not to do it!
One can go further, and AB does. The notion of an observable is itself fundamentally obscure. It cannot mean observable given current technology or current theory, for that is too strong: as the history of the neutrino “discovery” indicates, taking this position would have led away from the truth, not towards it. But it cannot mean “observable in principle” either, for this is irremediably vague: if it means “logically possible to observe,” then nothing short of an outright contradiction will fail to qualify as observable. So what then? AB quotes Grover Maxwell as observing: “There is no a priori or philosophical criteria for separating the observable from the unobservable.” The best we can say is that theories with observable consequences are better situated than those without, ceteris paribus. But what goes into the ceteris paribus determination is forever up for grabs, and subject to the inconclusive (yet critically important) vagaries of judgment. No method, just mucking around, always.
AB makes a last observation I’d like to highlight: unlike many, AB emphasizes that part of the scientific enterprise is building “sky-castles.” This is not a scientific aberration, nor an example of science misfiring, but part of the central enterprise. For the scientist (including the linguist; see here) “[s]pinning new ideas about how the world could be – or in some cases, how the world definitely isn’t – is central to their work” (2). Explanation leans heavily on the modal ‘could’ in that quote. Not just what you see, or mild extensions thereof, but what could be and what couldn’t. That’s the stuff of understanding, and as AB notes, again rightly, “[the] goal of scientific theory is to understand [my emphasis, NH] the nature of the world with increasing accuracy over time.”
Methodological sadists, if given power, would sink scientific inquiry. They would make it nearly impossible to uncover unobservable mechanisms, which are an inherent part of all decent explanation in the sciences. MSers undervalue explanation and hence distrust the speculation required to get any. As AB notes, their dicta are at odds with the history of science. They are also deeply obscure. So, historically misguided and irredeemably obscure? Yes, but also sadistically useful. MSers sound tough-minded (just-the-facts kinda people), but really they are hopeless romantics, stuck with a view of method and inquiry that successful inquiry has largely ignored, as should linguists, at least if they want to get anywhere.
 Pullum, Haspelmath, Tomasello and Everett are prominent examples of such in my own little world.
The strong form would say that no theory should invoke the undetectable, not only new ones. However, MSers generally aspire to be gatekeepers and, in practice, this means keeping out the new. Facing out, rather than in, also has one important advantage: it is pretty hard to argue that accepted results are suspect without making one’s methodological injunctions sound dumb (recall that every modus ponens comes with an equally powerful modus tollens). Consequently, fire is reserved for the novel, which is always deemed to differ from the accepted in being methodologically deficient. Note that being methodologically deficient has its virtues in argument: it relieves the critic of actually having to go into details, of having to do the hard work of arguing against actual results. MSers generally paint with a broad methodological brush, and I would argue that this is the reason why.
Linguists here should be thinking of those who take Greenberg universals to be the only kinds that are legit (e.g. the crowd in note 1). Why do this? Well, because as MSers they demand that science eschew the unobservable. Greenberg universals just are estimates of co-occurrence (either categorical or probabilistic) among surface-visible language properties. Chomsky universals are not, and this is why, for MSers, Chomsky universals are verboten.