Tuesday, July 9, 2013

A further note on falsificationism


This note is spurred by some points made by Alex Clark (here):

Every scientific theory aims for truth, and truth is not a goal exclusive to falsificationism. However, finite beings that we are, we cannot gaze directly at a theory and see whether it is true. Hence we look for marks or signature properties of truth and falsity. Falsificationism puts great weight on one of these features and lesser weight on others. The first virtue, and not merely first among equals, is "getting the facts right." Non-naive falsificationists (e.g. Lakatos) then gussy this all up to the point that the method turns out to be "do the best you can, realizing that it's a complex mess." This, of course, is a nice version of Feyerabend's "anything goes," just said more politely. I personally like Percy Bridgman's version: "use your noodle and no holds barred." The net effect of these methodological nostrums is to make them impossible to apply qua method, for there is nothing methodical about them. Given the (apparently) insatiable desire for mechanistic answers to complex issues, people fall back on "covering the data" as the best applicable proxy. And this is where the problem arises.

The problem is twofold: (i) what the relevant data are is not self-evident, and (ii) this leads to ignoring all the stuff that makes the non-naive approaches non-naive. So what we get is a ham-handed application of the sophisticated views that, in practice, is just our old friend naive falsificationism.

Let me say a quick word about (i).  Naivety often begins with a robust sense of what the data are.  As the sophisticates know, this is hardly self-evident. So take an example close to home for some: does Kobele's work demonstrate once and for all that natural language grammars are not mildly context-sensitive because they fail to display constant growth? I have not witnessed mass recantations of the MCS view from my computational colleagues. No, what I have seen are attempts (actually, more often hopes that some attempts would be forthcoming) to reanalyze the morphological and syntactic data (pronounced copies) so that they are defanged.
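For readers who want a concrete handle on the formal property at issue, here is a minimal illustrative sketch (in Python; a toy of my own devising, not Kobele's argument or data) of what "constant growth" amounts to at the level of string lengths: in a toy language like {a^n b^n c^n} the attested lengths have a bounded gap between them, whereas in a language built by iterated doubling, like {a^(2^n)}, the gaps between attested lengths grow without bound, which is what a failure of constant growth looks like.

```python
# Illustrative toy only: "constant growth" viewed at the level of string lengths.
# A language satisfies (the length version of) constant growth if, beyond some
# point, consecutive attested string lengths differ by at most a fixed constant.

def length_gaps(lengths):
    """Return the gaps between consecutive distinct string lengths."""
    xs = sorted(set(lengths))
    return [b - a for a, b in zip(xs, xs[1:])]

# Toy MCS-style language {a^n b^n c^n}: lengths 3, 6, 9, ... -> every gap is 3.
mcs_like_lengths = [3 * n for n in range(1, 20)]

# Toy iterated-doubling language {a^(2^n)}: lengths 2, 4, 8, 16, ...
# -> gaps 2, 4, 8, 16, ... grow without bound, so constant growth fails.
doubling_lengths = [2 ** n for n in range(1, 20)]

print("a^n b^n c^n gaps:", length_gaps(mcs_like_lengths)[:5])   # [3, 3, 3, 3, 3]
print("a^(2^n) gaps:   ", length_gaps(doubling_lengths)[:5])    # [2, 4, 8, 16, 32]
```

The toy comparison is only meant to make the property vivid; whether the relevant natural language data (the pronounced copies) in fact force something like the doubling pattern is exactly what the reanalyses just mentioned dispute.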

Let me add that I am entirely sympathetic to this effort in principle. It's what we do all the time, and it is one way we try to protect our favored accounts.  We do this not (merely) out of a misplaced love of our own creations (but who doesn't love her own mental products?) but because our favorite theories often have EO that would be lost were they rendered false. Moreover, the more that would be lost, the more rational it is to demand a high standard of proof before we give up on the old theory, and even then we will demand of the replacement that it get the previous theory's successes right (usually as limiting cases).

My main point has been that we should extend a similar set of considerations to theories with high EO/low BI.  But in order to do this we must try to keep firmly in mind what the central targets of explanation are. This is why Plato's and Darwin's problems are so important. Yes, they are vague and need precisification. But they are not powerless, and they can be used to evaluate proposals. They provide, in other words, a second axis for theory evaluation, alongside "data coverage."

Two last points. First, there are other factors, in addition to the two mooted above, that serve as marks of truth: simplicity, elegance, Occamite stuff, etc.  These have served to advance thinking and are very hard to make precise, hence the injunction to "use your noodle." Second, arguments gain in persuasiveness the more local they are.  These dicta, though very vague in the abstract, are remarkably clear when applied in local circumstances, at least most of the time.  As Alex Drummond has repeatedly noted: it's often all too easy to recognize a counter-example or bit of recalcitrant relevant data when it comes flying your way. The same goes for EO, elegance, etc.  In particular contexts, these high-flying notions get tied down and can become useful. The aim of lots of theoretical work is to figure out how to do this. Not surprisingly, however, this is more art than technique, hence the disagreements about which factor should weigh most heavily in deciding which way to turn. My view is that in this hubbub we should not lose sight of the animating problems and discount them in favor of what is more easily at hand. And this, in my view, is precisely what falsificationism encourages.

11 comments:

  1. There's an ambiguity/vagueness/unclarity here that bothers me: whether "getting the facts right" does or does not include getting the 'significant generalizations'. For example, have you got the facts of English right if your phrase-structure rules work, but there are two versions of the NP rule, one for singulars and one for plurals?

    I think this is a significant issue because getting the generalizations is a rather more demanding task than getting the brute facts, and hard questions come up as to what the generalizations really are. E.g., is the resistance of dative, accusative, and genitive subjects in Icelandic to agreement one fact or three?

  2. I've been vague, but you are right: generalizations are a big deal. If an account cannot handle a well-established generalization, that is more problematic than being unable to accommodate a stray un/acceptable sentence or two. But I am not sure where you are going with this.

  3. Not necessarily anywhere in particular, but it seems to me to be something very important, which people pay considerable heed to in inexplicit practice but don't talk about very much. So perhaps it would be good to have a story about where it fits in, especially since, as a goal, it sits somewhere between brute accounting for the facts and the higher levels of explanation that many people are uncomfortable with.

    For example, even if double central embeddings such as my Plato example turn out to pose a processing problem for at least some people (some messing around with an adapted version for Modern Greek got mixed results on the Greek Linguistics Facebook group), the requirement of capturing the generalizations of the singly center-embedded version is still a substantial problem for the advocates of flat structure.

    Replies
    1. Couldn't agree more. What's made Generative Grammar so interesting has been the discovery of a couple of dozen non-trivial generalizations. I see explaining these, where they come from and why they arise, as the main avenue to a deep understanding of FL/UG. I have made myself somewhat of a pest by insisting on this with my minimalist colleagues. So yes, yes, yes. Generalizations rule! (Or should.)

  4. I think it's incorrect to say that "every scientific theory aims for truth." As you note, we wouldn't know truth if we saw it, anyway. In describing science as being a search for indirect indicators of truth (or falsity), and in describing (roughly) explanation and empirical coverage as two dimensions of theory appraisal, I think you're getting at the mechanics of theory choice more directly (and more accurately).

    Based on what I've read in philosophy of science (mostly essays and books by Larry Laudan), it seems to me that there are more than two dimensions along which theories are evaluated, some of which have to do with how theories relate to data, and some of which with how theories relate to other theories and to themselves.

    Perhaps echoing and expanding on Avery's point, "getting the facts right" itself consists of multiple evaluative dimensions (e.g., the range of data accounted for, the accuracy with which data are accounted for, whether a theory accounts only for data it was designed to account for or if it also accounts for other data [i.e., makes surprising predictions]). The more purely theoretical concerns are similarly multidimensional (e.g., internal consistency, consistency with 'neighboring' theories, fecundity, simplicity).

    So, while some of these may bear more or less direct relationships to truth (or, more to the point, if a theory is true, it may be more or less likely to exhibit certain of these properties), and while any given scientist may well believe that their favored theory is true (or non-boring, or elegant), what matters in convincing other scientists of the value of a theory are these more objectively defined criteria.

    I think this is all consistent with your argument against naive falsificationism, which is to say that we can still make the case that this position puts undue emphasis on a small subset of our evaluative criteria.

  5. I agree with a lot of what you say here, but I guess I don't see Plato's problem and Darwin's problem as separate nonempirical issues.
    It is an empirical fact that children learn/acquire languages, and that the LAD that they use to do the acquisition task evolved in some way.
    So these are facts that need to be explained, just as syntacticians might want to explain some island effect.
    Suppose we have theory A, which explains lots of syntactic facts but has no theory of learning; theory B, which also explains lots of syntactic facts and explains the acquisition facts, but has a very rich UG with no account of how that might have evolved; and theory C, which posits a small UG and has an acquisition theory and a plausible theory of evolution, but no explanation of, for example, the cross-linguistic variation in island effects. Then all three theories have some *empirical* problems. They fail to account for some of the facts, and which one of them you favour depends on which you think the important facts are.

    So if I understand your argument here, EO refers to the nonlinguistic facts like the acquisition fact and the evolution fact. But I think these are part of the data that need to be "covered" just as the syntactic data do.

    And I don't think that falsificationism per se favours one type of fact over another.

    Replies
    1. Speaking carefully, I believe you are right. They are just facts, like any other. However, as a practical matter, I think it pays to separate them out for special status as they are the "large" facts that "smaller" facts are assembled to understand. I know that these distinctions are not precise. But in the realm of methodology, there is no precision worth having. These are general rules of thumb, attitudes, mores. They are squishy, but nonetheless play an important and constant role.

    2. But then your objection really has nothing to do with falsificationism, but rather with the selection of which facts are important or not. I completely agree with you (if this is what you are saying! I have prior form here ...) that the large facts are more important than the small ones.

      In other words, one can be a falsificationist and interested in the large facts (me), or a non-falsificationist and interested in the large facts (you), or a non-falsificationist interested in the small facts (some of the other commentators here), and so on.

    3. Let me edit that a little, because though I think there is something very right about falsificationism, even in its more sophisticated forms it doesn't quite capture the reality of scientific methodology, which is, as you say, a bit fuzzier. For me, it's more about making testable predictions than about the falsification/confirmation debate.

      My point is just that the choice of scientific methodology is largely independent of/orthogonal to your choice of what facts you think are most important.

      And however much I might disagree with you about the importance of precision, and the value of GB, I do agree with you about what the important large facts are, and about the importance of deciding what the important questions are.

    4. That's enough for me. The facts that people tend to notice are the small ones. Falsificationism, I believe, orients attention to these. If one agrees that the big facts count as much as these, then I am fine.
