The comments, especially those by the always level-headed David
Pesetsky, have provoked the following restatement of my beef with
falsificationism. Its main defect is
that it presents a lopsided and hence counter-productive image of scientific
practice. It thus encourages the wrong methodological ideals and this has a
baleful effect on the research environment.
How so?
First, falsificationism tacitly assumes that the primary
defect a proposal/story/theory can have is failure to cover the data. In other words, it tacitly presents the view
that the primary virtue a theory has is data coverage. Sophisticated versions recognize other benchmarks;
however, even non-naïve versions of falsificationism, in virtue
of being falsificationist, place primary stress on getting the data points well
organized. Everything else is secondary to this. I don’t buy this. In the sciences
in general, and linguistics in particular, the enterprise is animated by larger
‘why’ questions and the goal of a proposal is to answer (or at least address)
these. This means that there are at
least two dimensions of proposal
evaluation (PE)[1]:
(i) how well a proposal covers the “facts,” and (ii)
how far it advances explanation. Moreover, neither is intrinsically more important
than the other, though in some contexts one is weighted higher than the
other. A good project is to try to
identify these contexts, at least roughly. However, the null position should be
to recognize that both are equally important dimensions of evaluation ceteris paribus.
Second point: in practice, I believe, the two virtues are not generally equally weighted. For lots of linguists (and many outside
critics of the field), the second is only weakly attended to. Many think it meet to devalue a proposal if
it does not cover the “relevant” facts. This criticism has been levied with approval countless times.
Far rarer are the times when suspicion is cast on a proposal because it entirely
misses the explanatory boat. However, and
here is my main point, failure to advance the explanatory agenda is just as
problematic as failure to cover some data points. In fact, in practice, it is often easier to
evaluate whether a particular story has any Explanatory Oomph (EO) than to determine
whether the data points not covered are actually relevant. We all agree (or
should) that the only data points worth covering are the relevant ones, and that determining what’s relevant is no simple
matter. However, we often act as if it is clear what the relevant data are,
while taking the explanatory goals to be irremediably obscure. I don’t agree.
The goal of early syntactic theory was to explain why
grammatically competent humans were able to use and understand sentences never
before encountered. Answer: they had internalized generative grammars composed
of recursive rules. The fine structure of possible grammars was investigated
and some consensus was reached on what kinds of rules they deployed and
what constraints they obeyed. This set up the
next question (Plato’s Problem): what allows humans to develop generative
grammars? And we answered this by attributing an FL to human minds; the
project was to describe it. We concluded
that an FL with a principles and parameters architecture in which parameter
values are set by PLD would explain why we are linguistically capable, i.e. a
description of how this is done answers the ‘why’ question.[2]
The current Minimalist project, I have argued, rests on a similar ‘why’
question: why do we have the FL we have and not some other conceivable
kind? And we are looking for an answer
here too. Roughly, we are betting on the following kind of story being right: FL
is a congeries of domain-general powers (aka operations and principles) with a
small dollop of linguistic specificity thrown in. If we can theoretically actualize this sort
of picture we will have answered Darwin’s Problem, yet another ‘why’
question. My modest proposal: part of proposal
evaluation should involve seeing how a particular story helps us answer these
questions.
Let me go further. There are times to emphasize one of the
two criteria over the other. A good time
to have high regard for (ii) is when a program is starting out. The goal of the
early stages of inquiry into a new question is to develop a body of doctrine
(BOD) and this in practice requires putting some recalcitrant facts to the
side. One develops a BOD by showing what a proposal buys you, in particular how,
if correct, it can address an animating ‘why’ question. When there is some BOD
with some EO in place, empirical coverage becomes crucial, as this is how we
refine and choose among the many basic approaches, all of which are of the
right kind to answer the motivating ‘why’ question. Of course, both activities go on at the same
time. It’s not like for the first 10 years we value (ii) and ignore (i) and
vice versa for the second ten. Research
is not so discrete. However, there are
times when answers to (ii) are hard to come by, and at these times valuing proposals
that potentially meet these kinds of demands is, I would argue, very
advisable. Again, the trouble with falsificationism is that it encourages a set
of attitudes that devalue the virtue of (ii).
Where does this leave us/me?
I agree with David Pesetsky that there are different levels of ‘why’
questions, and that they can be pursued in parallel. Addressing one does not preclude addressing another. I also
agree that we pursue these higher-level questions by making proposals about how
things work. I also endorse the view
that how and why are intimately intertwined. However, I suspect that there
might be a disagreement of emphasis: I think that whereas we both value how
“data” allows us to develop and judge our proposals, we don’t equally weight
the impact of EO. David has no problem
relegating the big ‘why’ questions to “the grand scheme of things,” making it
sound like some far-off fairyland (like Keynes’s long run, where we are all
dead?). True, David mitigates this by
adding the qualifier that we should not try to live there “all the time,”
suggesting that occasional daydreaming is fine.
However, my point is that even in the “more humble scheme of things”
when we work on detailed analyses of specific phenomena, indeed when we muddle
along finding “semi-organized piles of semi-analyzed, often accidental
discoveries,” even then we should try to keep our eyes on the explanatory prize
and ask how what we are doing bears on these animating questions. Why? Because they matter in evaluating
what we are doing no less than seeing whether the story covers some set of
forms/sentences in some paradigm. Both
are critical, though to my eyes only one (i.e. (i) above) is uncontroversially
valued and considered part of everyone’s everyday research basket of values.
This is what I want my rejection of falsificationism to call into question, as
it is something that even sophisticated versions (which of course are correct
if sophisticated in the right ways) still tend, operationally, to relegate to a
secondary position.
[1]
I would normally use ‘theory’ in place of ‘proposal’ but it sounds too grand. I
think that less encompassing projects that see themselves as smaller in scale are subject
to the same dual streams of evaluation.
[2]
Let me add, before I am inundated by misplaced comments, that I do not believe that we have “solved”
Plato’s Problem. I have written about
this elsewhere.
Norbert,
I agree with you, except (surprise!) where you criticize me:
"David has no problem relegating the big ‘why’ questions to 'the grand scheme of things' making it sound like some far off fairyland [...] suggesting that occasional daydreaming is fine."
Not at all what I had in mind! And also I absolutely agree that we should always "try to keep our eyes on the explanatory prize and ask how what we are doing bears on these animating questions". The thing is, linguists are, I think, allowed to enjoy the journey as well as the destination. I thought your posting was a bit gloomy on that score.
Also, I think (and maybe you disagree) there is a lot of proven value in linguistic investigations whose big-picture pay-off is unclear at the beginning of the project. Let me explain what I mean. I know of many great papers whose stated main point is a big-picture high-oomph idea. So you might think the linguist actually began by asking the big question, giving an exciting answer, and then going on a search for evidence -- but actually that's not what happened. The work really started with weeks of mucking around semi-aimlessly with crazy data from some language, asking low-level questions ("If we replace 'Mary' with 'John' do the judgments change?"), trying to figure out what the hell the data could be telling us -- and finally figuring out in an "aha" moment that (against all odds) the data bears on a deep question about FL that wasn't on anyone's mind when the project began. Then the paper gets written backwards (as it should be), with the big idea up front, and the evidence second, so the reader never knows how it all really happened.
Why did I write my comments? Well, I just don't want linguists who are spending a happy month of July trying to figure out some weird verb to think they are going to get hit over the head by the Minimalist cops because they can't immediately say what the explanatory payoff will be. I've just seen too many "explanatory prizes" won by linguists whose path to Minimalist success was twisty and unpredictable to want to be too normative about doing linguistics.
Oh god, "Gloomy"! echh! I didn't want to sound gloomy. Of course, you do what you can do and who knows how we get to the "truth," in many mysterious and convoluted ways I assume. We try what we can, we give the best arguments we can, we hope the gaps get filled in and we revel in our successes. By all means, just keep thinking and working.
So why did I write my post? It was in response to a falsification incident I witnessed, one that suggested that thinking abstractly was a waste of time UNLESS the obvious problems could be handled first. This is what I find unfortunate. These are the kinds of demands I don't like and find unsupportable. It is rare that a similar criticism is launched from the other end (i.e. that a proposal doesn't explain). So, I wanted to defend the uncommon practice of being led by theoretical concerns. These, I would argue, are legitimate, even if they confront empirical hurdles, much as empirically successful work is fine even if it has little EO. I favor a pluralistic attitude, but I find that theorists are constantly and invidiously criticized for valuing the EO pole over the empirical one. I just wanted to argue that we should even out the scorecard.
That agreed, go ahead and do the best you can. BOTH factors matter. I admire success however it comes. So long as we grant both kinds of research legitimacy, I'm fine and won't be the slightest bit cranky. However, as a matter of fact, I suspect that a paper submitted to LI, say, has a better chance of being published if empirically heavy and explanatorily light than the reverse. And this, I believe, indicates that the balance is not evenly weighted. This is what I object to. I don't intend (or want) to denigrate empirical research; I just want to raise the respect for the more theoretical variety. I assume that you would agree?
Definitely - thanks for clarifying. (I think I'm more sanguine about LI and similar journals than you are, though I get your point - but that's a separate conversation.)
It's not clear to me that the distinction between (i) and (ii) is as clean as it's made out to be here. Perhaps I'm misunderstanding what exactly you mean by (i) and (ii), but if a proposal/theory explains something, that something is empirical.
So, for example, grammatically competent humans being able to use and understand sentences never before encountered is an empirical phenomenon. To the extent that GB (or whatever theory) explains why this happens, it's explaining an observable fact.
Based on the description of GB's explanatory purpose as an example of (ii) and the mention of "detailed analyses of specific phenomena" in the last paragraph, which I take to be an example of (i), it seems like the difference between (i) and (ii) is quantitative, not qualitative.
Yes. Agreed. However, by 'empirical' I mean the standard kinds of data points (e.g. *John was believed was arrested, *John saw that Mary loved himself, or "Complementizer agreement shows you are wrong"). I am happy with the quantitative descriptor: it's not a difference of kind but a difference in the abstractness of the empirical concerns. But your point is well taken.
Delete"First, falsificationism tacitly assumes that the primary defect a proposal/story/theory can have is failure to cover the data. In other words, in tacitly presents the view that the primary virtue a theory has is data coverage. In sophisticated versions, other benchmarks are recognized, however, even non-naïve versions of falsificationism, in virtue of being falsificationist, place primary stress on getting the data points well organized."
Falsificationism is primarily concerned with whether a theory is true or false. It has nothing to do with organizing data points, whatever that might mean.
Even false theories -- homeopathy, Ptolemaic astronomy, psychoanalysis, etc. -- predict lots of things correctly, so confirmation just doesn't work as a philosophy of science.
You need some way of getting rid of false theories; what takes the place of falsificationism in your theory?
I agree that EO (good term!) is crucial. If you have a theory A which completely fails to address the central problems of the field, then even if it has better empirical coverage than a competitor theory B which does address them, you should prefer B to A. But how do we apply this in practice? Could you give an example? Do you mean MP replacing P&P because of Darwin's Problem?
I have posted something to address your first point concerning what replaces falsificationism. As to your second question, there are several examples in the history of linguistics, I believe. The one relevant to my own current concerns involves how to weigh the virtues of unifying the GB modules. My own interest in the last decade has been to try to unify all the module-specific principles in GB in terms of Merge. This, to speak informally, requires reducing all non-local dependencies to movement. Now, doing this creates reams of empirical problems, as I found out in practice when I tried to reduce control to movement. However, I believed, and still do, that the virtues of this kind of unification are great when seen in the context of DP. So great that I have been willing to downplay apparent empirical problems, at least for a while. So, here is a concrete example, no doubt tinged with special pleading.
There are other similar cases: the replacement of constructions with 'Move alpha' style rules led to a lot of empirical slack. This slack was slowly taken up by figuring out how to pack the descriptive details that were lost into heads as features (Criteria in Rizzi's sense). Some construction grammarians have argued that this is not possible in general. However, it was evident that, were we to go to a simpler transformational account and eliminate constructions as theoretical primitives, the observational s**t would hit the empirical fan. Still, it was a good idea even if it took 10 years to show it would work. And the work continues: think cartography.
Hope these examples help.