The comments, especially those by the always level-headed David Pesetsky, have provoked the following restatement of my beef with falsificationism. Its main defect is that it presents a lopsided and hence counterproductive image of scientific practice. It thereby encourages the wrong methodological ideals, and this has a baleful effect on the research environment. How so?
First, falsificationism tacitly assumes that the primary defect a proposal/story/theory can have is failure to cover the data. In other words, it tacitly presents the view that the primary virtue a theory can have is data coverage. Sophisticated versions recognize other benchmarks; however, even non-naïve versions of falsificationism, in virtue of being falsificationist, place primary stress on getting the data points well organized. Everything else is secondary. I don’t buy this. In the sciences in general, and in linguistics in particular, the enterprise is animated by larger ‘why’ questions, and the goal of a proposal is to answer (or at least address) these. This means that there are at least two dimensions in proposal evaluation (PE): (i) how well a proposal covers the “facts,” and (ii) how far it advances explanation. Moreover, neither is intrinsically more important than the other, though in some contexts one is weighted more heavily than the other. A good project would be to try to identify these contexts, at least roughly. However, the null position should be to recognize that, ceteris paribus, both are equally important dimensions of evaluation.
Second point: in practice, I believe, the two virtues are not generally equally weighted. For lots of linguists (and many outside critics of the field), the second is only weakly attended to. Many think it meet to devalue a proposal if it does not cover the “relevant” facts. This criticism has been levied with approval countless times. Far rarer are the times when suspicion is cast on a proposal because it entirely misses the explanatory boat. However, and here is my main point, failure to advance the explanatory agenda is just as problematic as failure to cover some data points. In fact, in practice, it is often easier to evaluate whether a particular story has any Explanatory Oomph (EO) than to determine whether the data points not covered are actually relevant. We all agree (or should) that the only data points worth covering are the relevant ones, and that determining what’s relevant is no simple matter. Nonetheless, we often act as if it is clear what the relevant data are while taking the explanatory goal to be irremediably obscure. I don’t agree.
The goal of early syntactic theory was to explain why grammatically competent humans are able to use and understand sentences never before encountered. Answer: they had internalized generative grammars composed of recursive rules. The fine structure of possible grammars was investigated, and some consensus was reached on what kinds of rules they deployed and what constraints they obeyed. This set up the next question (Plato’s Problem): what allows humans to develop generative grammars? We answered this by attributing to human minds an FL, and the project was to describe it. We concluded that an FL with a principles and parameters architecture, in which parameter values are set by PLD, would explain why we are linguistically capable; that is, a description of how this is done answers the ‘why’ question. The current Minimalist project, I have argued, rests on a similar ‘why’ question: why do we have the FL we have and not some other conceivable kind? And we are looking for an answer here too. Roughly, we are betting on the following kind of story being right: FL is a congeries of domain-general powers (aka operations and principles) with a small dollop of linguistic specificity thrown in. If we can theoretically actualize this sort of picture, we will have answered Darwin’s Problem, yet another ‘why’ question. My modest proposal: part of proposal evaluation should involve seeing how a particular story helps us answer these questions.
Let me go further. There are times to emphasize one of the two criteria over the other. A good time to have high regard for (ii) is when a program is starting out. The goal of the early stages of inquiry into a new question is to develop a body of doctrine (BOD), and this in practice requires putting some recalcitrant facts to the side. One develops a BOD by showing what a proposal buys you, in particular how, if correct, it can address an animating ‘why’ question. Once a BOD with some EO is in place, empirical coverage becomes crucial, as this is how we refine and choose among the many basic approaches, all of which are of the right kind to answer the motivating ‘why’ question. Of course, both activities go on at the same time. It’s not as if for the first ten years we value (ii) and ignore (i), and vice versa for the second ten. Research is not so discrete. However, there are times when answers to (ii) are hard to come by, and at these times valuing PEs that potentially meet these kinds of demands is, I would argue, very, very advisable. Again, the trouble with falsificationism is that it encourages a set of attitudes that devalue the virtue of (ii).
Where does this leave us/me? I agree with David Pesetsky that there are different levels of ‘why’ questions, and that they can be pursued in parallel. Addressing one does not preclude addressing another. I also agree that we pursue these higher-level questions by making proposals about how things work. I also endorse the view that how and why are intimately intertwined. However, I suspect that there might be a disagreement of emphasis: I think that whereas we both value how “data” allow us to develop and judge our proposals, we don’t equally weight the impact of EO. David has no problem relegating the big ‘why’ questions to “the grand scheme of things,” making it sound like some far-off fairyland (like Keynes’s long run, where we are all dead?). True, David mitigates this by adding the qualifier that we should not try to live there “all the time,” suggesting that occasional daydreaming is fine. However, my point is that even in the “more humble scheme of things,” when we work on detailed analyses of specific phenomena, indeed when we muddle along finding “semi-organized piles of semi-analyzed, often accidental discoveries,” even then we should try to keep our eyes on the explanatory prize and ask how what we are doing bears on these animating questions. Why? Because they are no less important in evaluating what you are doing than seeing whether the story covers some set of forms/sentences in some paradigm. Both are critical, though to my eyes only one (i.e. (i) above) is uncontroversially valued and considered part of everyone’s everyday research basket of values. This is what I want my rejection of falsificationism to call into question, as it is something that even sophisticated versions (which of course are correct if sophisticated in the right ways) still tend to operationally relegate to a secondary position.
 I would normally use ‘theory’ in place of ‘proposal,’ but it sounds too grand. I think that even self-consciously smaller-scale, non-encompassing projects are subject to the same two streams of evaluation.
 Let me add, before I am inundated by misplaced comments, that I do not believe we have “solved” Plato’s Problem. I have written about this elsewhere.