Methodological sadism (MS) is quite fashionable nowadays and
nothing gets practitioners more excited than the possibility that someone
somewhere is proposing something interesting (i.e. something that reaches
beyond the sensory surface of things and that might possibly reveal some of the
underlying mechanics of reality). You’ve all met people like this,[1]
and one of their distinctive character traits is a certain (smug?) assurance
that when it comes to the philosophy of science, they are on the side of the angels.
They love to methodologically demarcate the boundaries of legitimate inquiry so
as to protect the weak-minded from fake science.
Of course, the standard demeanor of MSers is severe. Yes, they are tough. But standards must be maintained lest we slide joyfully to our
scientific perdition. Like I said, you’ve all met MSers. Nowadays, at least in
my little domain of inquiry, they are the media stars and have done a pretty
good job convincing the outside world (and some on the inside) that GG is dead
and that there is really nothing special about the cognitive powers required
for language. I think that this is deeply wrong, and will write another brief
arguing as much in the next post. But for now, I want to, once again, offer
some prophylaxis against the most rabid form of MS: falsificationism. Here
is a useful short antidote, a paper (which I will refer to as 'AB'; Adam Becker
is the author) that touches all the right themes. Its main claim is that trying to
demarcate science from non-science is a mug's game that relies on ignoring how
real, successful domains of inquiry have grown.
So what are the main themes?
First, AB points out that MSers (my term, not AB's) adhere to
a basic erroneous principle: “that a new theory shouldn’t invoke the
undetectable” (2).[2]
Why? Because this makes it “unfalsifiable.”
So, observability underlies falsifiability and both are used by
“self-appointed guardian[s], who relish dismissing some of the more fanciful
notions in physics, cosmology and quantum mechanics [and linguistics! NH] as
just so many castles in the sky" in order to protect science "from all
manner of manifestly unscientific nonsense” (2).
There are ways of understanding falsifiability that seem
unobjectionable, namely that theories that can never have observable
consequences are thereby undesirable. Well, yeah. The problem is that this is a
very low bar, and any stronger version that "turn[s] ingenuity into fact" must
be "much more nuanced" (2). Why? Because falsifiability is hardly ever possible
and observability is undefinable. Let's consider both of these points seriatim.
First, falsifiability. This is impossible for scientific
theories for the simple reason that any falsification can be patched up with
the right ad hoc statement, leaving
the rest of the theory the same. As AB puts it (correctly) (2):
Falsifiability doesn’t work as
a blanket restriction in science for the simple reason that there are no
genuinely falsifiable scientific theories. I can come up with a theory that
makes a prediction that looks falsifiable, but when the data tell me it’s
wrong, I can conjure some fresh ideas to plug the hole and save the theory.
Any linguist knows how true this is. Moreover, it
is easier the less brittle a theory is, and our current theories tend to be
very labile. You can bend them in many directions without ever hearing a creak,
let alone inducing a crack or a break. You lose nothing when adding a bespoke
principle to explain recalcitrant data because the only thing there is to lose
in doing this is explanatory power, and there was not much of this to begin with
in flexible theories. However, even with good theories that have some oomph, it
is generally possible (I would say "always possible" but I am being mealy-mouthed
here) to plug the hole and carry on.
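To see why plugging is always available, it helps to make the logic explicit. Here is the standard Duhem-Quine schema (my own gloss and notation, not AB's): a theory T issues predictions only jointly with auxiliary assumptions A, so a failed prediction refutes the conjunction, never T by itself.

```latex
% Duhem--Quine schema (my gloss, not AB's notation).
% T = core theory, A = auxiliary assumptions, O = predicted observation.
% The prediction flows from the conjunction, never from T alone,
% so modus tollens plus De Morgan yields only a disjunction:
\[
\frac{(T \wedge A) \rightarrow O \qquad \neg O}{\neg T \ \vee \ \neg A}
\]
% One is thus always free to blame A (add a bespoke patch) and keep T.
```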
One of the virtues of AB is that it provides some nice historical examples
of this happening. As AB notes, the history of science is full of them.
AB recounts the famous one where Uranus's odd
(apparently non-Newtonian) orbit begets Neptune, which in turn begets Vulcan to
explain Mercury's perihelion, a hypothesis that finally fails when General Relativity
replaces Newton. AB notes that each historical move makes sense and that
looking for Neptune (victory!) and looking for Vulcan (failure!) were both
rational despite the different outcomes.
Of course, with hindsight, Neptune is a bold prediction that strengthens
the theory and Vulcan turns out to have been just an unfortunate wrong turn.
But Vulcan did not lead to people jumping the Newtonian ship. Rather “astronomers
of the time collectively shrugged and moved on” (4). And rightly so. Do you
really want to give up Newton just because Vulcan was impossible to spot? What
then do you do with all the other stuff it does explain?
Note that this means that there is an asymmetry
between potentially falsifying experiments that succeed and those that don’t.
The former are declared triumphs of the scientific will, while the latter
are quietly shelved and de-emphasized (or, more accurately, added to the ledger
of anomalies that a discipline collects as targets for superior
explanations yet to come). No reason to show these off in public and take the shine off the powerful rational methods
of scientific inquiry.
Of course, one might eventually hit the jackpot
and find the anomalies resolved with the right new theory. Mercury was a
feather in General Relativity’s cap. And well it should have been, for it allowed
us to dump Vulcan and replace it with a story that allowed us to also keep all
the good parts of Newton. So yes, exceptions prove the rule in the sense that
Newtonian exceptions prove (justify) Relativity’s rules.
AB provides other examples, one of the best being
Pauli’s proposal to save the Law of the Conservation of Energy via the
neutrino. It took over 25 years to prove him right (it's good to have
successors like Fermi who get interested in your ideas). But what is
descendants like Fermi who get interested in your ideas). But what is
interesting is not merely the long wait time, but why it took so long: the
neutrino, as originally proposed, had no properties that could have
allowed it to be detected. So when Pauli proposed it, the theory was
unfalsifiable because its basics were unobservable. But, as history shows, wait
25 years and who knows what unobservables might become detectable. As AB again,
rightly, puts it (5):
It’s certainly true that
observation plays a crucial role in science. But this doesn’t mean that
scientific theories have to deal exclusively in observable things. For one, the
line between the observable and unobservable is blurry – what was once
‘unobservable’ can become ‘observable’, as the neutrino shows. Sometimes, a
theory that postulates the imperceptible has proven to be the right theory, and
is accepted as correct long before anyone devises a way to see those things.
Not only is ‘observable’ irreparably vague, but
MSers also often make a second, more extreme demand concerning its
applicability. They require theories to postulate only constructs with directly
observable magnitudes. Laws of nature then simply relate "directly observable
quantities” and make no reference “to anything unobservable at all” (6). This
was essentially Mach’s views and, in hindsight, they served to hamstring
scientific insight. For example, as AB notes, Mach opposed atomic theory as
unscientific. Why? Well you cannot “see” atoms. Of course, as many pointed out,
there is lots to be gained by postulating them (e.g. you can derive the
principles of thermodynamics and explain Brownian motion). But this was not
good enough for Mach, nor for current day MSers. So how did Mach’s views hold
up? Well, it apparently cost Walter Kaufmann a Nobel Prize and might have led to
Boltzmann’s suicide, but aside from that it did not do any real damage because
the scientific community largely ignored Mach’s injunctions.
But isn’t postulating unseen elements
unscientific? Well, no. Note: even if we all agree that a theory must have
discernible (aka observable) consequences so that it can be tested/verified,
this does not imply that every part of the theory must invoke elements whose
properties are directly observable. Of course, it is nice if one can do this.
It is always nice to be able to measure. But the idea that the only thing worth
doing is relating (perhaps statistically) the magnitudes of observable
quantities is something that would sink our best sciences if implemented. And
this is a good reason not to do it![3]
One can go further, and AB does. The notion of an observable
is itself fundamentally obscure. It cannot mean observable given "current"
technology, for that is too strong. As the history of the neutrino "discovery"
indicates, taking this position would have led away from the truth, not towards
it. But it cannot mean "observable in principle" either, for this is irremediably
vague. If "in principle" means "logically possible" to observe, then nothing but
contradictions will fail the test. If it means observable given current technology
or current theory, we are back to a standard that is too strong. So what then?
AB quotes Grover Maxwell as observing: "There
is no a priori or philosophical
criteria for separating the observable from the unobservable." The best we can
say is that theories with observable consequences are, ceteris paribus, better
situated than those without. But what goes into the ceteris paribus
determination is forever up for grabs, and subject to the inconclusive (yet
critically important) vagaries of judgment. No method, just mucking around,
always.
AB makes a last observation I’d like to highlight: unlike
many, AB emphasizes that part of the scientific enterprise is building
“sky-castles.” This is not a scientific aberration, nor an example of science
misfiring, but part of the central enterprise. For the scientist (including the
linguist; see here)
“[s]pinning new ideas about how the world could be – or in some
cases, how the world definitely isn’t – is central to their
work” (2). Explanation leans heavily on the modal ‘could’ in the quote. Not
just what you see or mild extensions thereof, but what could be and couldn’t.
That’s the stuff of understanding and as AB notes, again rightly, “[the] goal
of scientific theory is to understand
[my emphasis, NH] the nature of the world with increasing accuracy over time.”
Methodological sadists, if given power, would
sink scientific inquiry. They would make it nearly impossible to uncover
unobservable mechanisms, and such mechanisms are an inherent part of all decent
explanation in the sciences. MSers undervalue explanation and hence distrust the
speculation required to get any. As AB notes, their dicta are at odds with the
history of science. They are also deeply obscure. So historically misguided and
irredeemably obscure? Yes, but also sadistically useful. MSers sound tough-minded
(just-the-facts kinda people) but really they are hopeless romantics,
stuck with a view of method and inquiry that successful inquiry has largely
ignored, and that linguists should ignore as well, at least if they want to get anywhere.
[1]
Pullum, Haspelmath, Tomasello and Everett are prominent examples of such in my
own little world.
[2]
The strong form would say that no
theory should invoke the undetectable, not only new ones. However, MSers generally
aspire to be gatekeepers and, in practice, this means keeping out the new. Facing
out, rather than in, also has one important advantage. It is pretty hard to argue
that accepted results are suspect
without making one's methodological injunctions sound dumb (recall that every
modus ponens comes with an equally powerful modus tollens). Consequently, fire
is reserved for the novel, which is always deemed to differ from the accepted in
being methodologically deficient. Note that the charge of methodological deficiency has its virtues in argument. It relieves
the critic of actually having to go into details, of having to do the hard work
of arguing against actual results. MSers generally paint with a broad
methodological brush, and I would argue that this is the reason why.
[3]
Linguists here should be thinking of those who take Greenberg universals to be
the only kinds that are legit (e.g.
the crowd in note 1). Why do this? Well because as MSers they demand that
science eschew the unobservable. Greenberg universals just are estimates of
co-occurrence (either categorical or probabilistic) among surface visible
language properties. Chomsky universals are not, and this is why, for MSers,
Chomsky universals are verboten.
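For concreteness, here is a toy sketch (invented languages and values, purely illustrative, not anyone's actual typological database) of what a Greenberg universal amounts to: a conditional co-occurrence estimate over surface-visible properties.

```python
# Toy illustration with invented data: a Greenberg-style implicational
# universal (e.g. "verb-initial order -> prepositions") is an estimate
# of co-occurrence among surface-visible language properties.
langs = [
    {"name": "Irish",    "order": "VSO", "adposition": "prep"},
    {"name": "Welsh",    "order": "VSO", "adposition": "prep"},
    {"name": "Japanese", "order": "SOV", "adposition": "postp"},
    {"name": "Turkish",  "order": "SOV", "adposition": "postp"},
    {"name": "English",  "order": "SVO", "adposition": "prep"},
]

vso = [lang for lang in langs if lang["order"] == "VSO"]
p_prep_given_vso = sum(lang["adposition"] == "prep" for lang in vso) / len(vso)

# The "universal" is just this conditional relative frequency; nothing
# unobservable or grammar-internal figures in the calculation anywhere.
print(f"P(prepositions | VSO) = {p_prep_given_vso:.2f} (n = {len(vso)})")
```

Nothing here reaches beyond the sensory surface, which is exactly why MSers find such universals congenial and why Chomsky universals, which posit unobservable grammatical mechanisms, do not fit the mold.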
I'm not sure what to say about all this; it fundamentally misunderstands the scientific enterprise but suggests nothing that might take its place.
Let's start with falsification. As I tell my introductory students, scientific hypotheses and theories need to be testable (the term I prefer). If they're not, then yes--they're castles in the air, you'll never know if they're true, and we should relegate them to mere philosophical maundering, or maybe a Star Trek episode.
The whole bit about Newtonian physics actually supports the notion of testability. Here we've got a well-specified theory. Whoops--now we've got an observation, the orbit of Uranus, that doesn't fit. Now we've got two hypotheses: either (1) there's another planet out there, or (2) something's wrong with the theory. Upon testing, you find that the answer is (1). For the orbit of Mercury, the answer is (2), because we looked, and Vulcan doesn't exist. Newtonian physics was tested and found wanting. (You agree that Newtonian physics is false in an important way, right?) Testing the theory against reality taught us something. Now, it's an elementary misunderstanding to suggest that the *entire Newtonian theory* needs to be thrown in the trash; we just found out through empirical test that it has important constraints.
None of this precludes the spinning of fantastic hypotheses or odd theories--they just have to be testable to be of any worth. In short, you can have a castle in the air, but empirical evidence helps build the pilings. Otherwise we'd have an infinite number of (untestable) theories, and they'd be accepted (or not) based on the rhetorical skill of the person promulgating them.
Same for linguistics. I need some way to test *this* idea against *that* idea and see which one has support. If there's no way to do that, then the ideas are "not even wrong," as Pauli once said. It'd be like listening to Freud argue with Jung. (Incidentally, "untestable in principle" means something different from "untestable using current technology.") When I see that (for instance) Tomasello has data on how children acquire language and Chomsky doesn't seem to, I'm naturally drawn to Tomasello's views.
One more point on linguistics, particularly of the Chomskyan sort. The whole enterprise has the whiff of guruism: Chomsky is always already right, except in a few cases where he didn't go far enough; anyone who disagrees with him just doesn't understand him (with an implication that they're too stupid to do so); his ideas are utterly transformative yet so elementary as to be truisms; yet somehow his theories shed no light on areas that would intuitively seem proximate and relevant (like child language acquisition, language disorders, learning more generally, etc., etc.). Oh, and pay no attention to the data behind the curtain; that won't tell you anything at all.
I see that close reading is not your thing. I sympathize. Yes, we test...ultimately. But there is a potentially very long period before testing matters. As a practical matter, it is open ended, or so the history tells us. So sure, look for data, think up theories, try to fit them to the facts, rinse, repeat.
In my case, I confess to more than a whiff. IMO, Chomsky has set the problem completely correctly. I don't always buy the details, but completely buy the meta theory, the basic methodology and the stated problematic. You clearly do not. I have no idea why, as you never say. What do you think he has gotten wrong? Maybe you think that Gs are not recursive? Or there is no Poverty of Stimulus problem? But if so, you are wrong. I doubt I could convince you, but then again in this matter we are symmetrical, as I doubt you could convince me.
It is also false that Chomsky has been remote from acquisition or parsing or brain studies. Crain, Lidz, Yang, Gleitman, Gallistel, Friederici, Poeppel, Dehaene have all found ways of making more than tangential contact with his work and so deepening the program. Again, you likely disagree, but if so, I'd love to see a detailed argument. I think I once invited you to write a full-throated defense of Tomasello, right? I even agreed to publish it. Do I recall correctly? If not, here is an invite: make the criticism. I will give you room. Make the argument so we can all see where Chomsky screwed up. We will all find it instructive, I am sure.
(Apologies for the deleted posts.)
Delete> "When I see that (for instance) Tomasello has data on how children acquire language and Chomsky doesn't seem to..."
You can't be serious. Norbert did not state this strongly enough: there is no shortage of acquisition work within the GG/Chomskyan enterprise, including syntax specifically. And I don't mean theory, I mean experiments and corpus studies.
From AB: "Newtonian gravity was ultimately thrown out, but not merely in the face of data that threatened it. That wasn’t enough. It wasn’t until a viable alternative theory arrived, in the form of Einstein’s general relativity, that the scientific community entertained the notion that Newton might have missed a trick."
The discoveries of Generative Linguistics are numerous and nearly every paper I have ever read in that field involves comparing the empirical predictions of (at least) 2 hypotheses. So, I'm quite puzzled at what you (Steve P) could possibly be so exercised about. If you have an alternative theory that covers the core phenomena that have been adduced by generative linguistics, let's see it.
As for the relevance of Generative Linguistics to language acquisition, I issued a challenge on this blog some years ago (http://facultyoflanguage.blogspot.com/2014/11/theres-no-poverty-of-stimulus-pish.html) inviting Poverty of the Stimulus skeptics to provide an account of a relatively straightforward case and am still waiting for serious engagement. Until I get that engagement, it seems reasonable to take your anxieties about falsifiability as a smokescreen for some other issue (like just not liking the ideas or some such).
ReplyDelete"... trying to demarcate science from non-science is a mugs game . . . ."
This, from the end of the second paragraph of Norbert's post, is basically correct, but there is something else nearby that is, arguably, less of a mug's game.
At least, Massimo Pigliucci & Maarten Boudry have so argued, in the 2013 book they edited, "Philosophy of Pseudoscience: Reconsidering the Demarcation Problem".
As Boudry points out in his contribution (pp. 31f) there are--of course, this is philosophy, after all!--Distinctions To Be Made. The most relevant here are those "between science and pseudoscience . . . [and between] science and nonscience in general" (note, if you care, that these are layered distinctions, so to speak).
The point of the book is that it is both important, and, so they claim, possible to make the science vs pseudoscience distinction, though it won't look like what you thought it looked like, unless you were paying close attention to a topic (viz., demarcation) that has been mostly put on the shelf for 30+ years. And, in fairness, the book also has historical and sociological studies of pseudoscience.
None of this, I suppose I ought to add, should give aid or succor to practitioners of MS; rather the opposite, I might in fact expect.
--RC