This note is spurred by some points made by Alex Clark (here):
Every scientific theory aims for truth. This goal is not exclusive to falsificationism. However, finite beings that we are, we cannot gaze directly at a theory and see whether it is true. Hence we look for marks or signature properties of truth and falsity. Falsificationism puts great weight on one of these features and lesser weight on others. The first virtue, and not merely first among equals, is "getting the facts right." Non-naive falsificationists (e.g. Lakatos) then gussy this all up to the point that the method turns out to be "do the best you can, realizing that it's a complex mess." This, of course, is a nicer version of Feyerabend's "anything goes," said more politely. I personally like Percy Bridgman's version: "use your noodle and no holds barred." The net effect of these methodological nostrums is to make them impossible to apply qua method, for there is nothing methodical about them. Given the (apparently) insatiable desire for mechanistic answers to complex issues, people fall back on "covering the data" as the best applicable proxy. And this is where the problem arises.
The problem is twofold: (i) what the relevant data are is not self-evident; (ii) this leads to ignoring all the stuff that makes the non-naïve approaches non-naïve. So, in practice, what we get is a ham-handed application of the sophisticated views that is just our old friend naïve falsificationism.
Let me say a quick word about (i). Naivety often begins with a robust sense of what the data are. As the sophisticates know, this is hardly self-evident. So take an example close to home for some: does Kobele’s work demonstrate once and for all that natural language grammars are not mildly context-sensitive because they fail to display constant growth? I have not witnessed mass recantations of the MCS view from my computational colleagues. No, what I have seen are attempts (actually, more often hopes that some attempts would be forthcoming) to reanalyze the morphological and syntactic data (pronounced copies) so that the challenge is defanged.
Let me add that I am entirely sympathetic to this effort in principle. It’s what we do all the time, and it is one way that we try to protect our favored accounts. We do this not (merely) out of a misplaced love of our own creations (but who doesn’t love her own mental products?) but because our favorite theories often have EO that would be lost were they rendered false. Moreover, the more that would be lost, the more rational it is to ask for a high level of proof before we give up on the old theory, and even then we will demand that the replacement get the previous theory’s successes right (usually as limit cases).
My main point has been that we should extend a similar set of considerations to theories with high EO/low BI. But in order to do this we must keep firmly in mind what the central targets of explanation are. This is why Plato’s and Darwin’s problems are so important. Yes, they are vague and need precisification. But they are not powerless and can be used to evaluate proposals. They provide, in other words, a second axis for theory evaluation in addition to “data coverage.”
Two last points. First, there are other factors in addition to the two mooted above that serve as marks of truth: simplicity, elegance, Occamite stuff, etc. These have served to advance thinking and are very hard to make precise, hence the injunction to “use your noodle.” Second, arguments gain in persuasiveness the more local they are. These dicta, though very vague in the abstract, are remarkably clear when applied in local circumstances, at least most of the time. As Alex Drummond has repeatedly noted, it’s often all too easy to recognize a counter-example or bit of recalcitrant relevant data when it comes flying your way. The same holds for EO, elegance, etc. In particular contexts, these high-flying notions get tied down and can become useful. The aim of much theoretical work is to figure out how to do this. Not surprisingly, however, this is more art than technique, hence the disagreements about which factor should weigh most heavily in deciding which way to turn. My view is that in this hubbub we should not lose sight of the animating problems or discount them in favor of what is more easily at hand. Further, in my view, this is precisely what falsificationism encourages.