Tuesday, July 2, 2013

Falsifiability


Minimalism induces falsifiability anxiety in otherwise unflappable people.  I noticed this while auditing a class here at the LSA summer institute (one of the undeniable perks of being on the faculty here): how could one show that Minimalism was false? Is there anything that would show that it is incorrect?  I’m not sure there is, but then I’m not sure that any interesting scientific proposal can be shown to be false. Let me explain.

Scientific theories are very complex objects. Even as regards more mature sciences like physics and chemistry, philosophers of science have long recognized that Popper’s simple version of falsificationism is an inadequate methodological credo. What makes theories hard to refute? Well, mainly the fact that there is quite a distance between the central concepts that animate a theory, the particular models that incarnate them, and the “facts” that test them. Lakatos talked about central belts versus auxiliary hypotheses (Cartwright, a favorite of mine on these topics, has good descriptions of how elaborate the testing process in physics can be and how wide the gap between theory and experiment is; see here), but now pretty much any account of how the rubber of theory hits the road of experiment highlights the subtle complexities that allow them to make contact. This said, scientists can and do find evidence against particular models (specific combos of theory, auxiliary hypotheses, and experimental set-ups), but how this bears on the higher-level theory is a tricky affair, precisely because it is never the theory of interest alone that confronts the empirical jury. In other words, when something goes wrong (i.e. when an experiment delivers up evidence that is contrary to the deductions of the model) it is almost always possible to save the day by tinkering with the auxiliary hypotheses (or the details of the experimental set-up, or the right description of the “facts”) and leave the basic theory intact.

The recent discovery of the Higgs particle offers a fair illustration of this logic. There was some discussion before the fact of where (i.e. which energy range) to look to find the Higgs. The one discovered was in one of the lower possible energy ranges. Say the Cernistas had not found anything where they first looked. Would this have falsified the Standard Theory? Nope, there were a whole bunch of other candidates to explore (some at energies that the facility would have strained to achieve). And say that even these had proved to be duds, what would have been the rational strategy? Dump the Standard Theory and assume that it was completely off track? Maybe for the young and the daring hoping to make their mark in the world, but my hunch is that this would have been chalked up as a puzzle to be explored and explained away until something better came along, rather than as an indication that the whole edifice was rotten and had to be thrown out wholesale. Why? Because the reason that people adopt theories is that they do work, and the work that the Standard Theory did, even had the Higgs been left undiscovered, would still remain. Yes, there would be problems, and yes, it would be nice to explain these “anomalies” away, but the theory would not have been dumped. There would likely have been ad hoc patches proposed to salve the disappointment and allow work to continue apace. As Hilary Putnam once observed, ‘ad hoc’ does mean ‘to the point’, and a nice clean local fix would have served quite nicely, I am sure.

So does this mean that theories are not regulated by “the facts”? No. Modulo all the caveats about how facts need massaging to be relevant targets of explanation, empirical success of course plays a role, but the role is not that of falsification. Rather, the facts serve a useful function when they are recruited to distinguish otherwise viable alternatives. And this is where falsificationism really misleads. I don’t know about you, but I find it hard to take most of my concocted explanations seriously because they are so obviously inadequate from the get-go. In other words, a candidate theory’s main problem initially concerns not falsification but verification. The pressing and relevant question is not whether there is counter-evidence but whether there is any interesting evidence in its favor! Most theories plop stillborn from the mind. Only a few are worth taking seriously. What makes them worth taking seriously? There are interesting facts they would explain were they true (note the ‘interesting’ here: some facts really are more interesting than others, but this is for another time). The first part of any sane research strategy is to find places where the account works. When one starts out there are all too many indications of failure, all too quickly, so one needs reasons for taking the hypothesis seriously; the most immediate concern is not whether there are problems with one’s account (of course there are) but what it would buy you were the theory (roughly) on the right track. So, the very first thing one does (again, if one is sane) is to find factual life support for one’s tender creation, nurturing it via verification and looking for evidence in its favor. In other words, unless one is an enthusiastic masochist, the last thing one does in the initial stages of theory development is look for reasons to discard one’s newborn proposal.

Does this mean that looking for contrary evidence is unimportant? No. It is important, but mainly in service of verification. Here’s what I mean. The best kind of evidence in favor of a proposal is the verification of a counterintuitive prediction (especially one that is problematic given current assumptions). So, for example, a very strong argument in favor of Copernicus’s account of a heliocentric solar system is that it predicts the possibility of retrograde planetary motion (viz. that planets, rather than moving smoothly forward around the night sky, would look like they reversed gear for a while before shifting back and moving forward again). If Copernicus was right (as we now think he was), then were you to calculate planetary motion using Earth as the center, what you would expect to find is the appearance of retrograde motion. Moreover, this motion would “disappear” once one did the calculations using the Sun as center. So rather than being a problem, as it was for the Ptolemaic conceptions, apparent (when viewed from Earth) retrograde planetary motion was predicted. This served to corral an otherwise rather unpleasant anomaly and so was strong evidence in favor of Copernicus’s account.
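
To see how directly this falls out of the heliocentric picture, here is a minimal back-of-the-envelope sketch (an illustration of mine, not Copernicus’s arithmetic: the orbital values are rough round numbers and the orbits are idealized as circular and coplanar). Computing Mars’s apparent longitude as seen from a moving Earth, the longitude runs backwards for a stretch around opposition, with no epicycles required:

    import math

    # Rough illustrative values: orbital radius (AU) and period (years).
    R_EARTH, T_EARTH = 1.0, 1.0
    R_MARS, T_MARS = 1.52, 1.88

    def longitude_from_earth(t):
        """Apparent (geocentric) ecliptic longitude of Mars, in degrees, at time t in years."""
        ex = R_EARTH * math.cos(2 * math.pi * t / T_EARTH)
        ey = R_EARTH * math.sin(2 * math.pi * t / T_EARTH)
        mx = R_MARS * math.cos(2 * math.pi * t / T_MARS)
        my = R_MARS * math.sin(2 * math.pi * t / T_MARS)
        return math.degrees(math.atan2(my - ey, mx - ex)) % 360

    # Step through two years; a negative change in longitude is apparent retrograde motion.
    prev = longitude_from_earth(0.0)
    for i in range(1, 201):
        t = i / 100
        lon = longitude_from_earth(t)
        change = (lon - prev + 180) % 360 - 180  # signed, wrap-safe difference
        if change < 0:
            print(f"t = {t:.2f} yr: Mars appears retrograde (change = {change:.2f} deg)")
        prev = lon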

Other examples abound: bending light rays, the perihelion of Mercury, shrinking rulers and slowing clocks, quantum tunneling, “spooky” action at a distance (aka entanglement), the tides, colors in white light, backwards control (hehe!!), among others. So, yes, looking for empirical trouble is part and parcel of the good theorist’s armamentarium, but mainly in service of finding strong verification. The strongest evidence in favor of an account lies with the surprising (counterintuitive) predictions that it makes that turn out to hold. That’s the main reason to go falsificationist and chase potential heartbreak! It’s strategic: new theories need to gain a hearing, and the best way to do this is to find a wild, unexpected prediction that pans out. So, the smart theorist looks for ways to falsify her account in order to find those that pan out. In other words, the big game here is not the false results, but the predictions that work. If any do, then the theory has earned the right to be taken seriously, and then the next stage of serious work begins.

With this as background, let’s return to minimalist syntax.

As with any other theory, minimalism has leading ideas and executions of such. Chomsky likes to talk of the Minimalist Program. The way I see this is as a series of basic conceptions (merge, Probe-Goal, phase locality, minimality, Extension, etc.) that can be packaged in different ways to produce varying minimalist theories or models (a personal favorite: one can think of control as a Probe-Goal effect with PRO the goal of a higher functional probe, or one can think of PRO as the trace-like residue of internal merge). These theories are then explored by finding how well they fit the “established” facts (e.g. re control: do they derive the distribution and interpretation of control sentences?) and what novel predictions (the more surprising the better) they make (e.g. do they allow for the possibility of backwards control?). Models will accrete successes and failures and will be judged over a certain period, winning fans and detractors. The success of the program will be a function of the theoretical and empirical suasive powers of the particular theories. None of this is novel to linguistics, nor should it be.

How then does a theory fail, linguistic or otherwise? Actually, mainly by running out of steam. Boredom is more deadly than a couple of false data points. Theories can run out of explanatory steam or, worse, never really develop any. Such theories and their attendant programs are abandoned. So there is something worse than being wrong, at least if you are a theory, and that’s being BORING! If correct, this has a useful practical consequence: the Minimalist Program has a very long and bright future, for boring it ain’t!

42 comments:

  1. Truly fascinating, I consider myself privileged to witness the long-awaited fifth stage in philosophy of science. After Popper's falsificationism, Kuhn's paradigm shifting, Lakatos' 'research programming', and Feyerabend's "Against Method", we now have Hornstein's "Against Boredom". I shall return to this great achievement momentarily.

    First a couple of dull observations:

    1. Scientific theories are complex beasts and much of what is predicted to exist is not that easily confirmed [if it were, we would not need theories; we'd just see the stuff, like Higgs particles or planetary orbits]. So it's okay if a theory predicts a lot for which we have no empirical confirmation yet, and if there are phenomena that seem to contradict the theory - just be patient and listen to the wise theorizer.

    2. Don't look for falsification but for confirmation. This is an important advance from Chomsky's criterion for progress: “Suppose that counterevidence is discovered as we should expect and as we should in fact hope, since precisely this eventuality will offer the possibility of a deeper understanding of the real principles involved” (Chomsky, 1982, 76). Glad to learn we have moved on from this...

    Now let's return to the science trail-blazing boredom criterion. I know some excellent linguists who find it quite boring to talk about 'biological foundations of language' and think work on such topics should be left to professional biologists or psychologists. At least some [maybe all] biolinguists disagree. They do not think work on [or even better theorizing about] the biological language faculty is boring. And at least some of them seem to consider boring "the whole mass of data that interests the linguist who wants to work on a particular language" (Chomsky, 2012, 84) and just want to abstract away from it.

    Now, someone as unimaginative as myself wonders: who gets to decide what is and is not boring? Or is it up to every linguist to study what s/he considers not boring, regardless of what 'the rest of the field' does? And if 'false data points' do not matter much, why would one get all excited when some people make claims about recursion one finds disagreeable? Even write papers entitled 'Recursive misrepresentations'? The tone sounded quite furious to me, but I entirely missed the part where Levinson [2013] was accused of being BORING....

    In fact he was accused of such trivial things as not getting the facts right: “Far be it from us to condemn speculation in linguistics. … We do believe, however, that a speculation … if advanced on the basis of misrepresentations, mischaracterizations and confusion about basic issues, is not off to a good start” (Legate et al., 2013, 12).

    So: is there more besides "Against Boredom" that you have not been telling us?

    Replies
    1. In your remarks about Legate et al. (2013), I think you've managed to confuse the proposition "these data falsify the theory" with the proposition "these data are false". The former is the topic of Norbert's posting, the latter is not.

    2. Thank you for the kind suggestion. However, I think you under-appreciate the revolutionary character of Norbert's proposal. He sums it up nicely in the final paragraph:

      How then does a theory fail, linguistic or otherwise? Actually, mainly by running out of steam. Boredom is more deadly than a couple of false data points. Theories can run out of explanatory steam or, worse, never really develop any. Such theories and their attendant programs are abandoned. So there is something worse than being wrong, at least if you are a theory, and that’s being BORING!

      A theory, even a false one, is okay, as long as it is not BORING. If this is so, then the distinction between the proposition "these data falsify the theory" and the proposition "these data are false" evaporates. We embrace false theories, so we need not worry anymore about data that potentially falsify the theory, far less about data that are merely false. Applied to Levinson [2013]: if Levinson has a non-boring theory, even a false one, who cares if his data are wrong?

      I am a bit surprised you would still defend the outdated distinction you make. Chomsky has advocated the Galilean style for over a decade. Clearly, that style is based on Feyerabend's methodological anarchism. We don't just haphazardly replace a method here and there – when it comes to method, the new slogan is "anything goes". And now, after the Hornsteinian revolution, we embrace full-blown data-anarchism: as long as it fights boredom anything goes.

    3. Uh no, that's not what Norbert wrote. But whatever.

    4. Actually "that's" a direct quote:

      "How then does a theory fail, linguistic or otherwise? Actually, mainly by running out of steam. Boredom is more deadly than a couple of false data points. Theories can run out of explanatory steam or, worse, never really develop any. Such theories and their attendant programs are abandoned. So there is something worse than being wrong, at least if you are a theory, and that’s being BORING!"

      So it IS exactly what Norbert said, caps, exclamation mark, and all.

      Maybe you are suggesting "Against Boredom" requires the kind of creative citation style Chomsky is so skilled at? Like his famous claim about a passage in Terry Deacon's (1997) "The symbolic species", a book containing a 100+ page section "Brain": "Whatever the meaning may be, the conclusion seems to be that it is an error to investigate the brain” [Chomsky, 2002, p. 83]. This astounding claim alone catapults "On Nature and Language" right off any 'Anti-Boredom' meter, because all the detailed brain science stuff Deacon was going on and on about was sure to fool his readers and it took a true genius to suggest it was all just a decoy...

    5. Try to replace "boring" with "not inspiring".

    6. Christina, I'm afraid that your theory fails to meet Norbert's boredom criterion.

    7. Oh, finally we are getting somewhere: Norbert is the one who gets to decide what is and isn't boring. What took you so long to answer a question I asked days ago?

      And, BTW, I never proposed any general "my theory" - so obviously "my theory" cannot meet Norbert's [or anyone's] criteria. There may be some detailed and specific theories that I would defend, but... [It took a while, but the rest of us have finally caught up with this ingenious Chomskyan arguing device]

    8. "boring" means "can't provide further insights" hence "the running out of steam". Such theories should be abandoned. What exactly is faulty here? You are beating up a strawman.

    9. But Christina, even your "direct quote" doesn't say what you claim it says.

    10. Reply to David:

      Forgive me, but the referent of your 'that's' was not clear. Now that we are clear that you claim what I wrote AFTER quoting Norbert does not follow, let me reply: it is quite possible that the conclusions I showed to follow when one takes Norbert seriously were not what Norbert had intended. If you suggest my reasoning was faulty, please show exactly where I deviated from what follows logically from the direct quote.

      It would ALSO be helpful to show how what Chomsky says below follows from the direct quote by Margaret Boden [which, as we both know, he took out of context]:

      "To begin with, Boden does not seem to comprehend the terms she uses. Thus she refers repeatedly to my "postulation of universal grammar" (UG) and writes "What universal grammar will turn out to be -- if it exists at all -- is still unclear." UG is the term that has been used for many decades to refer to the theory of the genetic component of the human language faculty, whatever it will turn out to be .... To question the existence of UG, as she does, is to take one of two positions: (1) there is no genetic component; (2) there is one, but there is no theory of it. We can presumably dismiss (2), so Boden is left with (1). She is therefore questioning the existence of a genetic factor that played a role in my granddaughter's having reflexively identified some part of the data to which she was exposed as language-related, and then proceeding to acquire knowledge of a language, while her pet kitten (chimp, songbird, etc.), with exactly the same experience, can never even take the first step, let alone the following ones. It is either a miracle, or there is a genetic factor involved. Boden's suggestion -- presumably unwitting -- is that it may be a miracle." [Chomsky, 2007]

      Can you enlighten us where Boden denies there are genetic differences between Chomsky's grand-daughter and kittens, songbirds, and chimps?

    11. I would like to thank everyone who offered advice re terms I could use as replacements for 'boredom'. That was very kind of you, but it entirely misses the point I am making [apparently only David 'clued in'].

      See, when generativists like Legate et al. criticize non-generativists like Levinson, it is not because Levinson's theory is boring or 'not inspiring' [it certainly seems to inspire him] or 'has run out of steam' [it seems he is just getting started]. What Legate et al. criticize is that [according to them] Levinson's 'speculation' is "advanced on the basis of misrepresentations, mischaracterizations and confusion about basic issues" [p.12]. THAT is what makes it worthy of criticism. [For the record: I fully agree; IF Levinson is guilty as charged, that IS a bad thing.]

      Now some of you may not have read all the wonderful posts Norbert has provided over the past 10 months. I encourage you to check out October 16, 2012, where he educates us on how the science game is played: http://facultyoflanguage.blogspot.ca/2012/10/how-to-play-game.html

      Among other things Norbert tells us that when you are a SCIENTIST, S, and want to convince us that some theory T [which accounts for phenomenon P] is wrong, "if you want to play the explanation game, the “science” game, then you are obliged ... to explain why you think the assumptions are faulty and (usually, though there are some exceptions) you are obliged to offer an (at least sketchy) non-trivial question begging account of P. S cannot simply note that s/he thinks that T is really really wrong, or that T is unappealing and makes her/him feel ill"

      Being boring probably would be the kind of thing that makes Norbert feel ill. But, on October 16, he said that's not enough to toss out a theory. And he cites David Adger, who slams Tomasello for not playing the science game:

      "…CxG [Construction Grammar, NH] proponents have to provide a theory of how learning takes place so as to give rise to a constructional hierarchy, but even book length studies on this, such as Tomasello (2003), provide no theory beyond analogy combined with vague pragmatic principles."

      This criticism has nothing to do with 'boring' or ‘running out of steam’ [the Tomasello lab easily outlasts an army of energizer bunnies]. The allegation is that no scientific theory is offered, just analogies and vague pragmatic principles.

      So what my concern boils down to is this: do generativists apply two different standards? I would hope not, but from studying Norbert's blog it seems that generativists' work is good as long as it meets Norbert's "Against Boredom" criterion, while the theories of others are judged by "Play by the rules of the Science game".

    12. @Sveid, who says:
      ""boring" means "can't provide further insights" hence "the running out of steam". Such theories should be abandoned. What exactly is faulty here?"

      I don't really see this line of argument as correct. Theories should be abandoned if they are "false". So for example in maths (admittedly not an empirical science) people abandon the study of particular mathematical areas because they no longer find them interesting or important, but the theory developed up to that point is still correct. Similarly in empirical science, we may abandon the study of, say, a particular organism, e.g. smallpox, for various reasons like nobody having smallpox any more, but that doesn't mean that the facts that we have discovered up to that point are now false.
      So we all agree with that, but Norbert's point is something completely different, and more controversial. He means "abandon" in the sense of "now think to be false". And that argument just seems wrong (I guess he is appealing to Lakatosian degenerating research programmes, which is a different argument).

    13. Norbert meant 'abandoned' when he wrote 'abandoned.' People stop working on (investigating the properties of, developing refinements to, testing the consequences of) theories that they've milked so much that they no longer find them interesting. The return in insight is not worth the effort anymore. In my view, this is what happened with GB, for example: it ran out of steam. Now, as you know, I DO NOT think that this means that GB is/was false. That's too crude. Indeed, I think that GB was roughly right; however, it is not fundamental. It is a good "effective theory" in search of a more fundamental one. But abandoned it has been by theorists. And I can understand why: it stopped yielding insight and its questions were no longer tantalizing. Furthermore, when a new theory beckons, the first thing you do is NOT look for flaws. In this context falsification is a silly strategy. In fact, it is a silly strategy until the theory is pretty mature (and by about then it is becoming boring), and not one that I think any sane person pursues. The Popperazzi (Leonard Susskind's term) beg to differ. Falsification is their holy grail. I disagree; in many (most) contexts it is a form of abuse (generally directed at others' proposals, I have found).

      Last point: it's a throwaway line to say that one is "pursuing truth." Duh. Sadly, however, we can't know truth directly. So we look for the MARKS of truth. Some think that the main mark of truth is covering (yet more) data points. Some think it's more complicated than that. I am in the latter camp, and one of the more salient marks of truth, one of the features that make a proposal worth pursuing and developing, is that it is interesting (a context-sensitive value), aka not boring. This is partly a matter of taste, I've found, for damn it if some individuals aren't drawn to boring like moths to a flame. But taste, unlike technique, cannot be taught, and that's too bad. However, 'interesting' generally means provides explanatory insight. Theories do run out of this and are for that reason abandoned. Does that mean they are false? No. They are boring and not worth further effort.

    14. Thank you for the clarification, Norbert. I have just one question. You write:

      "The Popperazzi (Leonard Suskind's term) beg to differ. Falsification is their holy grail. I disagree, in many (most) contexts it is a form of abuse (generally directed at other's proposals I have found)."

      This is a pretty serious accusation. Can you offer a few examples of linguists who have directed the 'holy grail of falsifiability' in an abusive manner at others' proposals? You speak of many contexts, so let's say 5 examples. Thanks.

    15. "Norbert meant 'abandoned' when he wrote 'abandoned.'"

      I was actually scratching my head wondering where exactly you said or implied anything other than that. It's a *ahem* frequent phenomenon these days.

    16. Sorry for the misinterpretation, I clearly did get the wrong end of the stick. But now I am more confused. Let me try again shortly once my ideas are straight.

    17. Reply to Alex: It is not surprising that you would be confused. Just this year Chomsky himself stated that the study of language should keep to the standard norms of science:

      "In recent years, work on these topics has often been called ‘‘the minimalist program (MP).’’ The term has been misunderstood. The program is simply a continuation of the efforts from the origins of the generative enterprise to reduce the postulated richness of UG, to discover its actual nature (see Freidin and Vergnaud, 2001). The literature contains many criticisms of the MP, including alleged refutations, charges that it is not truly minimalist, and so on. None of this makes any sense. Research programs are useful or not, but they are not true or false. The program might be premature, it might be badly executed, but it is hard to see how it could be fundamentally misguided, since it hardly goes beyond holding that the study of language should keep to standard norms of science." [Chomsky, 2013, 38].

      Of course most of us regard falsifiability of proposals as one of these standard norms. Norbert calls [the demand for] falsification a form of abuse. So minimalist standard norms seem to differ from what the rest of us think the standard norms are.

    18. I guess the distinction that confused me is between the sociological question of how theories actually change in linguistics and the normative question of how they should change so that they lead to theories that are ultimately correct. So sociologically, in linguistics, it is correct that people moved away from GB not because of empirical problems but because of other sociological factors ('boredom', say).
      But, I am reminded, this does not mean that GB is false, as abandoning a theory is not the same as thinking it is false.

      But I don't understand the relationship in this argument between being correct and being interesting. So if we are interested in finding correct theories (I am) then we need some argument that correct theories are interesting, but of course there isn't (and can't be) one, since whether a theory is correct or not is ultimately an objective fact whereas whether it is interesting is a matter of taste.
      Norbert argues it both ways, that being interesting is a salient mark of truth, but also that being boring doesn't mean that it is false.

      Anyway, I see Norbert's tongue firmly in his cheek here...
      and I agree with the attack on naive falsificationism even if I don't buy this particular flavour of Lakatosian analysis.

    19. I think in the post Norbert made a stronger claim: "So there is something worse than being wrong, at least if you are a theory, and that’s being BORING!"

      I read this as meaning that if I have the choice between 2 theories, one Wrong [W] and the other Boring [B], I should go with W [and abandon B]. Now B seems neutral between W and T [true], but if you add what Norbert said later, it would seem he had mainly true-but-boring theories in mind for B.

      As far as the relationship between being correct and being interesting is concerned - you're not the only one having a difficult time with that one. Apparently at least some leading linguists do as well:

      "... it is somewhat puzzling that as scientists we would have a serious notion of what would be more interesting than the truth. For instance, it would definitely be more interesting to discover that the moon is made almost entirely of green cheese than that it is made of rock and dust, especially given that it looks like it is made of rocks and dust, and the samples that have been brought back are—rocks and dust! It would be more interesting to learn that pigs cannot fly because their wings are made of an invisible substance that is too insubstantial to support their weight, rather than that they simply lack the anatomical and physiological wherewithal in the first place. ..... But granting that the less interesting explanations are the right ones, scientists do not give up the good fight and turn to other pursuits. Why should linguists?" [Culicover, 2004, 134]

      I think Peter asks an excellent question: just WHY should linguists give up on "the good fight"? Now I could not agree more with Alex D., who remarked yesterday: "There's no point in linguists/philosophers discussing string theory on a linguistics blog. None of us have anything to say about it." So, it would be great if the answer [if one is forthcoming] would focus on linguistic considerations alone.

  2. I wonder: is the problem with falsifiability really that "scientific theories are complex objects", or is it more a conflict between Bayesian epistemology (i.e., that one can observe evidence, but not truth) and Popper's (to me, rather incoherent) stance that scientific data is somehow equivalent to truth? If you buy this (and I'd love to believe you are a dyed-in-the-wool Bayesian!), then the idea that scientists should be concerned with "confirming" rather than "falsifying" theories is a somewhat strange one to press. From the Bayesian perspective, evidence is always "confirming" one set of theories and "falsifying" others, since evidence can only be interpreted in light of theories. In fact, your invocation of "surprisal" (a crypto-Bayesian term for likelihood if I've ever heard one ;)) seems to suggest that you yourself are interested in "falsification" -- of competing theories (to be clear: my reading of "surprisal" is "observing an event that should have extremely low probability *according to some theory*").

  3. I have no problem with the basics of Bayes (I started life in the Columbia University Philo dept and was influenced on these topics by Isaac Levi (cf. his Gambling With Truth)). What I find off about the Bayesian approach is the idealization. Here's what I mean. Phenomenologically, one is not pitting theories in a well-defined space of options against one another. Rather, the space itself is very patchy and one is using data to, as it were, construct the space of alternatives. The hard problem is knowing what to compare, not having options and using the data to sift them apart. There is no method for this process of constructing the space of relevant options, not even a Bayesian one (which reduces this very inchoate process to a far too mechanical procedure). Bayes works well when the alternatives are demarcated. It strikes me as missing the crux of the epistemological problems, as it starts by assuming away the hard part: what's worth taking seriously? What do the real alternatives look like? This is the hard problem, and this is where falsificationism misleads. So, in the ideal circumstance, Bayes is fine (at least for me); however, we are almost always far away from the ideal when one is on the frontiers of research, and so the Bayesian dicta apply only very very loosely. This said, sure, surprisal reflects the obvious pre-Bayesian idea that evidence is strongest when unexpected.
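
    For what it's worth, here is the ideal two-alternative case I have no quarrel with, in a minimal sketch (the priors and likelihoods are invented numbers, purely for illustration): once the alternatives are demarcated, one and the same piece of evidence raises one posterior while lowering the other, and the "surprisal" of the evidence under a theory is just the negative log of the likelihood that theory assigns it.

        import math

        priors = {"T1": 0.5, "T2": 0.5}       # two demarcated alternatives
        likelihoods = {"T1": 0.9, "T2": 0.1}  # P(evidence | theory)

        p_evidence = sum(priors[t] * likelihoods[t] for t in priors)
        for t in priors:
            posterior = priors[t] * likelihoods[t] / p_evidence  # Bayes' rule
            surprisal = -math.log2(likelihoods[t])  # in bits; low likelihood = big surprise
            print(f"{t}: prior {priors[t]:.2f} -> posterior {posterior:.2f}; "
                  f"surprisal {surprisal:.2f} bits")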

  4. Just a tiny clarification. Lakatos's contrast was between the "hard core" of the research program versus the "protective belt" of the auxiliary hypotheses. The idea being (as you say) that in response to recalcitrant data you could (almost) always change your auxiliary hypothesis rather than give up a core tenet.

    Replies
    1. Indeed, but Lakatos has of course been criticized by Feyerabend:

      "Lakatos realized and admitted that the existing standards of rationality, standards of logic included, were too restrictive and would have hindered science had they been applied with determination. He therefore permitted the scientist to violate them (he admits that science is not "rational" in the sense of these standards). However, he demanded that research programmes show certain features in the long run — they must be progressive.... I have argued that this demand no longer restricts scientific practice. Any development agrees with it" .[Feyerabend, 1978]

      That seems more in the spirit of what Norbert promotes in his post, as long as it meets the 'not boring' criterion.

  5. This is a somewhat vague hunch, so I'm very willing to be told that I'm wrong, but it seems to me that the distinction between the "hard core" of a research program and the "protective belt" of auxiliary hypotheses might have something to do with the frequent disagreement about the role of formalisation. I usually find myself somewhere in between the two positions that are staked out in that debate (in the instantiation on this blog a few weeks ago, roughly Alex C. on one side and Norbert and David P. on the other).

    I don't want to put words into anyone's mouth, but my understanding is that what the pro-formalisation side often means to encourage is the practice of formalising particular combinations of core-idea-plus-auxiliary-hypotheses, and I wonder if perhaps those who disagree see this as eliminating an important distinction. In particular, one might worry that the auxiliary hypotheses will come to be seen as part of the core, and that some counter-example will be perceived as a strike against a core idea (e.g. that natural languages involve movement) when actually it really only counts against a particular auxiliary hypothesis (e.g. about exactly what the target position of wh-movement is, or whatever). I notice, for example, that Norbert uses the term "theory" to mean the core and uses the term "model" for core-plus-auxiliary-hypotheses, and on this usage perhaps a pro-formalisation person who asks for "formalised theories" rather than "formalised models" could give the impression that everything in the formalisation is to be considered part of the core, so that everything in the formalisation should stand or fall together as one monolithic object. But I don't think that is what the pro-formalisation side really means to suggest.

    It's true that it's usually not explicitly "written into" a formal system which pieces of mathematical machinery comprise the core and which parts comprise the auxiliary hypotheses, and this does sometimes seem to worry those who respond to the pro-formalisation argument. But it's still generally possible to work out, when an incorrect prediction emerges, whether it's a core idea or an auxiliary hypothesis which is "at fault". In other words, when you put together a new formalised system of core-idea-plus-auxiliary-hypotheses in order to try to accommodate new facts, it's usually easy to tell whether you've made a change to the core idea or merely to some of the auxiliary hypotheses. So in a sense it is true that formalisation can eliminate the distinction between core idea and auxiliary hypotheses, in that a particular set of equations or whatever doesn't itself draw the dividing line, but this doesn't prevent the scientist from maintaining that distinction, and responding in the appropriately subtle not-naive-falsificationist ways when the formalised system reveals an incorrect prediction. The ideas we formalise needn't be only those core ones to which we are relatively strongly committed.

    Also, there's obviously a difference between
    (a) thinking that a theory should be abandoned as soon as it is falsified (by a single observation), and
    (b) thinking that a theory is more valuable (all else being equal) if it meets the criterion of falsifiability.
    The "pro-formalisation" argument has nothing at all to do with (a), as far as I can tell.
    It does require that we accept (b), I think (or at least, the argument is stronger if we accept (b)). But (b) does seem to be generally accepted among "Chomskyan linguists" (e.g. I think that's what Chomsky is getting at when he says that we should hope to find counter-examples).

    To end speculatively and provocatively (and optimistically perhaps): is there any hope that some aspects of the formalisation debate might be resolved by clarifications of how the two sides treat the distinction between core ideas and auxiliary assumptions?

    Replies
    1. So say we have a class of grammars / theory of grammar G and a particular grammar E for English. Then presumably the core would be the class of grammars, and the auxiliary would be the particular grammar E. And if fully formalised, one could say that the core+aux would be falsified by demonstrating that the grammar makes the wrong predictions about some particular English sentence. But this obviously wouldn't falsify the core. And if the core is not formalised, then it is hard to see how it could be falsified, in this case. So the combination of a lack of formalisation plus an interest in the deeper problems does seem to lead inevitably to theories which are not falsifiable.

  6. Maybe. I think, however, the bigger problem/difference is that some of us don't think that it is very hard to find evidence against proposals even given the low levels of formalization. Alex made this point and I agree. As I said some time earlier, I think that people mistake the value of formalization. It lies not in getting theories to be more falsifiable, but in better understanding how basic concepts interrelate. This is a BIG plus when it is doable. Formalization per se does not advance falsifiability, as falsifying is already all too easy.

    Yes, if a theory makes no in-principle falsifiable claims, it's not good. But I know of almost no theories of this kind within the linguistics that I follow. They are not only falsifiable but have been falsified, if what we mean by this is that they are either incomplete or in apparent contradiction with well-accepted data. It's for this reason that I don't find the idea all that useful. It fails to engage what people actually do.

    Replies
    1. When you say it's "already all too easy" to falsify a theory, do you mean that it's already easy to find some facts that any new proposal doesn't account for (i.e. there remain some "unsolved problems" that plague basically all theories), or that it's already easy to find facts that falsify new proposal X but which do not falsify existing alternative theory Y that X is competing against?

      In the first sense, I of course agree: no theory on the table at the moment is entirely descriptively adequate. But in the second sense, I am not so sure that it's always "all too easy". Mostly I suppose I have in mind cases where it seems that a "new" proposal is simply a notational variant of an existing proposal, which is not unheard of, and this seems like a situation where formalisation could really help.

    2. I think I meant the first. But, at least at first blush, it is easy to find problems with a new proposal that do not *seem* to be problems for an older one. I have found that this is often not the case: the older account often "gets the facts" in no more elegant or principled a fashion than the newcomer. Also, as you say, it may take time to sort out that we are dealing with notational variants. However, what I meant was the first. Does it matter?

    3. I guess I was just trying to warn against inferring, solely from the abundance of the unsolved-problem kind of falsifiability, that there is plenty of "relative falsifiability", i.e. substantive differences among competing theories. My own feeling is that formalisation probably has more to offer in clarifying the substantive differences between one theory and its near-neighbours than in bringing out the big unsolved-problem kind of issues.

      For example (getting back to one of my favourite pet issues, which you and I have discussed at great length), I think the differences between a "copy theory of movement", or a theory with multidominance structures, or a theory with traces, are sometimes overstated. I'd argue that nothing at all follows from the switch from traces to copies in and of itself, although I suspect that this is not universally accepted and that formalisation could perhaps help to resolve this disagreement.

    4. Agreed. I also think that the best place for formalization issues lies in investigating the underlying structure of the basic concepts. We used to do this a lot in philosophy and it helped clarify what you had in mind. I think there is a nice place for this in syntactic theory as well, as you know. So we seem, once again, to be on the same page. Yay!!!

  7. The problem with this entire discussion is that, again, it is void of any specific examples that illustrate what is asserted. This technique [invented by Chomsky a long time ago and perfected over the years] makes it possible to evade any criticism of one's views/proposals. David can claim "that's not what Norbert said" but when asked where I went wrong he refused to answer. I have asked repeatedly for examples of frivolous requests for formalization but none are forthcoming. So let me provide a specific example to illustrate what non-minimalists are concerned about. I know Norbert will not like this example - sorry about that, but then by now he has had ample time to provide his own:

    Proposal 1 [P1]

    FLN, “is the abstract linguistic computational system alone, independent of the other systems with which it interacts and interfaces...The CORE property of FLN is recursion... it takes a finite set of elements and yields a potentially infinite array of discrete expressions” (Hauser, Chomsky, & Fitch, 2002, p. 1571, my emphasis).

    Proposal 2 [P2]
    Fitch, Hauser and Chomsky (2005) argue, “the putative absence of obvious recursion in one of [the human] languages ... does not affect the argument that recursion is part of the human language faculty [because] ...our language faculty provides us with a toolkit for building languages, but not all languages use all the tools” (pp. 203-204), and they suggest that “the contents of FLN ... could possibly be empty, if empirical findings showed that none of the mechanisms involved are uniquely human or unique to language, and that only the way they are integrated is specific to human language” (Ibid., p. 181).

    P1 is a scientific proposal that can be falsified by empirical data. So when Everett [2005] came along claiming Piraha has no recursion, defenders of P1 had two options:
    [i] accept Everett's empirical findings and abandon P1, given that its CORE property claim had been falsified [and come up with P3 to account for the new data];
    [ii] show that Everett had made a mistake and that Piraha in fact has recursion.

    We all know there was a good deal of [ii] going on, and if Minimalists had left it at that we would still be doing science. But they did not, and proposed P2. A core property was demoted to 'one tool among others' and it was asserted that FLN can be empty. So P2 is no longer falsifiable by ANY empirical data anyone could possibly find. It asserts that even core properties do not have to be present in language L, and that FLN exists AND that it possibly can be empty. So no matter what anyone finds, P2 is unfalsifiable.

    Of course I COULD BE wrong. P2 could be falsifiable. But if so, it would be of great help to show here and now HOW P2 could be falsified [formalized or not].

    Replies
    1. if so, it would be of great help to show here and now HOW P2 could be falsified

      P2 is your own construction, a collection of frankenquotes from random parts of Hauser, Chomsky & Fitch's various papers. However, to briefly reply to its disparate bits:

      The proposal that a particular language L might have Chomsky's rule of Merge without its recursive step is eminently falsifiable. Sentences in such a language would have maximally two words. See our paper, footnote 11, where this is discussed. Chomsky's Merge is binary, combining two elements at a time. If your favorite syntactic theory allows more than two elements to combine to form a syntactic constituent, then the proposal that this rule lacks the recursive step in some language L would not limit sentences to length 2, but the proposal would still be falsifiable insofar as more or less every test for constituency from any standard syntax textbook should fail in L.
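
      To make the arithmetic concrete, here is a toy sketch (an illustration only; the mini-lexicon and the code are invented for this comment, not the paper's formalism): binary Merge forms a set from exactly two syntactic objects, so if it is barred from applying to its own output it can only combine lexical items, and no derivable expression exceeds two words.

          def merge(a, b):
              # Binary Merge: exactly two syntactic objects in, one set out.
              return frozenset([a, b])

          lexicon = ["the", "dog", "barked"]

          # Non-recursive Merge: inputs restricted to lexical items, so every
          # derivable expression contains exactly two words.
          one_step = [merge(a, b) for a in lexicon for b in lexicon if a != b]

          # Recursive Merge: outputs feed back in, so larger constituents arise.
          three_words = merge(merge("the", "dog"), "barked")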

      The proposal that the language faculty as a whole makes options available that not all languages use is a truism of linguistics (it's what we spend much of our time figuring out, in fact), and among your omitted bits is a set of examples provided by Fitch, Hauser and Chomsky to exemplify just this: for instance, the existence of languages with three-vowel systems even though languages are fully capable of using five-vowel systems etc. This truism could be falsified by showing that languages claimed to be different from, say, English, actually are not. So, for example, contrary to what I have always assumed to be true, I actually do speak and understand Navajo, and Hawaiian actually has as many vowels as English.

      Finally, the idea that FLN could be empty, discussed briefly by Fitch, Hauser & Chomsky, is not a sign that the proposal is unfalsifiable, but actually states one way in which their proposal could be shown false. They go on to tell their readers what conclusion they would draw in such a circumstance, which is why they brought the matter up in the first place.

      Etc.

    2. I may be getting the dialectical situation wrong (as I have before) but this seems backward to me.

      If the proposal P2 is roughly that every language has recursive merge, and C is asking how this could be falsified, and then D says "The proposal that a particular language L might have Chomsky's rule of Merge without its recursive step is eminently falsifiable. Sentences in such a language would have maximally two words.", then this is not a falsification, even if a language with only two-word sentences comes along.
      What is needed for a falsification is the converse: namely that every language *with* recursive merge has sentences with more than two words.
      If we have that converse statement, and then a language from the Amazon comes along with only two word sentences, then we can falsify the universal claim.

      The fact that a language *without* recursive merge will only have maximum two (or three) word sentences is irrelevant.

      And it seems hard to show the required converse, since there are presumably some feature systems that control the derivations and spell-out, etc. So, for example, to take a naive phrase structure view (sorry), there are CFGs that only generate two-word sentences.
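
      For concreteness, here is one such toy case (my invented example, not anyone's analysis of a real language): the CFG with the rules

          S -> A B
          A -> a
          B -> b

      generates exactly one sentence, the two-word string "a b". So the mere fact that a formalism permits recursion does not guarantee that a particular grammar stated in it yields sentences longer than two words.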

    3. Reply to David:
      [This reply will be in several parts]

      It never occurred to me that you might turn answering my question into a demonstration of your double standards. But since you went for it, I have to hand it to you: this was a resounding success. In the same thread in which you accuse me of frankenquoting [the most entertaining bashing I have ever taken – I shall frame it for my grandkids to admire] you refuse to comment on Chomsky’s gross distortions of Boden [leaving the audience to believe you think it was acceptable that he attributed utter stupidity to her]. So apparently, what he did in a PUBLISHED paper was okay, but what I did in an informal blog with a tight word limit was not merely wrong but ‘a collection of frankenquotes’?

      Now a few comments on your own inaccuracies. You write:

      "P2 is your own construction, a collection of frankenquotes from random parts of Hauser, Chomsky & Fitch's various papers. However, to briefly reply to its disparate bits:"

      Sorry, I have to correct you, but P2 is based entirely on quotes from ONE paper: Fitch et al., 2005. This paper was, at least partly, an attempt to undo damage caused by Everett’s claims about Piraha. It was also a reply to Pinker & Jackendoff [2005], and what you call frankenquotes has been cited by Jackendoff and Pinker in their reply to Fitch et al. as well – are these two as grossly incompetent as you seem to imply I am?

      Next you attempt to ridicule me:

      “The proposal that a particular language L might have Chomsky's rule of Merge without its recursive step is eminently falsifiable. Sentences in such a language would have maximally two words. See our paper, footnote 11, where this is discussed. Chomsky's Merge is binary, combining two elements at a time. If your favorite syntactic theory allows more than two elements to combine to form a syntactic constituent, then the proposal that this rule lacks the recursive step in some language L would not limit sentences to length 2, but the proposal would still be falsifiable insofar as more or less every test for constituency from any standard syntax textbook should fail in L.”

      This ‘rebuttal’ has been called ‘silly and dishonest’ by a highly accomplished linguist: Geoff Pullum. I quote the relevant passage here:

      “If Merge involves putting two expressions together to make a larger expression, then barring it from affecting its own outputs would mean that it would have to apply solely to words. It could put two words together to make a two-word phrase, but that would be the limit. Any occurrence of a three-word sentence would refute this restriction and show that Merge must be able to affect its own outputs. What is grossly dishonest is to represent Everett or me or anyone else as unable to understand this. Of course we agree that languages have phrases more than two words long. Nobody is denying that, and this discussion would never have started if anyone had hinted that the issue on the table was the existence of 3-word phrases. And what is silly is to represent (in effect) the discovery of 3-word phrases as an important result of modern linguistics”. [Pullum, 2012, and since it’s a partial quote here is also a link: http://chronicle.com/blogs/linguafranca/2012/03/28/poisonous-dispute/

      To be continued

    4. Reply to David, part 2:

      Your next passage again distorts what I wrote and distracts from what is at issue:

      “The proposal that the language faculty as a whole makes options available that not all languages use is a truism of linguistics (it's what we spend much of our time figuring out, in fact), and among your omitted bits is a set of examples provided by Fitch, Hauser and Chomsky to exemplify just this: for instance, the existence of languages with three-vowel systems even though languages are fully capable of using five-vowel systems etc. This truism could be falsified by showing that languages claimed to be different from, say, English, actually are not. So, for example, contrary to what I have always assumed to be true, I actually do speak and understand Navajo, and Hawaiian actually has as many vowels as English”.

      First, you talk about a truism, when in fact to date no one has provided any evidence from biology for the Chomskyan LF: “The proposal that the language faculty as a whole makes options available that not all languages use is a truism of linguistics” – we do not KNOW what ‘the language faculty as a whole’ is, far less which options it may make available. Once you have provided concrete biological evidence for the LF, you can maybe talk about truisms – so far we have at best hypotheses.

      Next, I had put CORE in capitals for a reason, because that is what is at issue. HCF [2002] claimed recursion was the CORE property of FLN; 3 years later it was only ONE tool among MANY. So the distractions about 3- or 5-vowel systems are just that: distractions – no one had claimed that having a 5-vowel system is a CORE property of human language. Again, this is in essence what Jackendoff and Pinker wrote:

      "Moreover, FHC equivocate on what the hypothesis actually consists of. They write:
      The only “claims” we make regarding FLN are that 1) in order to avoid confusion, it is important to distinguish it from FLB, and 2) comparative data are necessary, for obvious logical reasons, to decide upon its contents.
      But they immediately make a third claim regarding FLN, namely the recursion-only hypothesis (reproduced from the original article). They then add: “To be precise, we suggest that a significant piece of the linguistic machinery entails recursive operations.” which actually substitutes a weaker claim: “recursion only” becomes “recursion as a significant piece.” This is soon replaced by a still weaker version, namely, “We hypothesize that ‘at a minimum, then, FLN includes the capacity of recursion’.” Thus in the course of a single paragraph, recursion is said to be the only component of FLN, a significant component of FLN, and merely one component of FLN among others". (J&P, 2005, p. 217)

      to be further continued

    5. Alex, I misspoke. What I should have said is "A proposal that every language has Recursive Merge would be falsified by finding a language with non-recursive Merge. Here's what such a language would look like ..." (And by the way, Hauser etc. did not actually claim that every language should have recursive Merge.)

      Christina, I've had enough. I thought it was vaguely useful to not let inaccurate factual claims stand as the last word in one of these discussions, but obviously that's a lost cause as long as you participate in this blog. So I give up.

      I will, however note that what Pullum called "silly and dishonest" is in fact just plain true. A set with two members has ... two members. Do you really want to argue about that? Bye.

    6. "Christina, I've had enough"

      Oh, that's alright. I just finished reading Levinson [2013] and can understand that you have more important things to do - like making sure that you [pl] do not let so many inaccurate factual claims stand as the last word in Legate et al. [2013].

    7. "Christina, I've had enough. I thought it was vaguely useful to not let inaccurate factual claims stand as the last word in one of these discussions, but obviously that's a lost cause as long as you participate in this blog. So I give up."

      You're an intellectually dishonest asshole. You're also too stupid to reason correctly about falsification.

  8. This comment has been removed by the author.

    Replies
    1. Reply to David, part 3:

      Let's have a look at your final paragraph:

      Finally, the idea that FLN could be empty, discussed briefly by Fitch, Hauser & Chomsky, is not a sign that the proposal is unfalsifiable, but actually states one way in which their proposal could be shown false. They go on to tell their readers what conclusion they would draw in such a circumstance, which is why they brought the matter up in the first place.

      In my original post I was admitting that I could have misunderstood Fitch et al.’s proposal and was asking whether it IS falsifiable. You claim that the “idea that FLN could be empty … actually states one way in which their proposal could be shown false”. But when I look at their text I see:

      “The contents of FLN are to be empirically determined, and could possibly be empty, if empirical findings showed that none of the mechanisms involved are uniquely human or unique to language, and that only the way they are integrated is specific to human language. The distinction itself is intended as a terminological aid to interdisciplinary discussion and rapprochement, and obviously does not constitute a testable hypothesis” (Fitch et al., 2005, p. 181).

      So it seems here the authors assert we are NOT looking at a testable hypothesis. If you disagree, please tell us why they wrote what they did. I also would like to draw your attention specifically to: “if empirical findings showed that none of the mechanisms involved are uniquely human or unique to language, and that only the way they are integrated is specific to human language”. This is of course what Deacon or Tomasello [and many others] proposed a long time ago: the way cognitive mechanisms are integrated is specific to language but the mechanisms are not. If this is an acceptable conclusion for Fitch et al., then please explain to me what, in such a case, can account for the features of language acquisition you claimed earlier someone like Tomasello cannot.

  9. This is stupid and disingenuous. You liken your theory to successful theories that are widely accepted due to massive evidence and their accurate predictive value ... which begs the question [used correctly]. But the majority of "scientific proposals" have gone by the wayside because they have been shown to be false (and thus were falsifiable). By rejecting falsification for your proposal you simply assume it to be correct without warrant.
