Saturday, May 25, 2013

Formalization and Falsification in Generative Grammar

There's been a very vigorous discussion in the comments section of this post, expatiating on the falsifiability of proposals within generative grammar of the Chomskyan variety.  It interweaves with a second point: the value of formalization.  The protagonists are Alex Clark and David Pesetsky. It should come as no surprise to readers that I agree with David here. However, the interchange is worth reading and I recommend it to your attention precisely because the argument is a "canonical" one, in the sense that they represent two views about the generative enterprise that we will surely hear again (though I hope that very soon David's views prevail, as they have with me).

Though David has said what I would have said (though much better), let me add three points.

First, nobody can be against formalization. There is nothing wrong with it (though there is nothing inherently right about it either).  However, in my experience its value lies not in making otherwise vague theories testable. Indeed, as we generally evaluate a particular formalization in terms of whether it respects the theoretical and empirical generalizations of the account it is formalizing, it is hard to see how formalization per se can be the feature that makes an account empirically evaluable.  This comes out very clearly, for example, in the recent formalization of minimalism by Collins and Stabler. At virtually every point they assure the reader that a central feature of the minimalist program is being coded in such and such a way. And, in doing this, they make formal decisions with serious empirical and theoretical consequences, decisions that could call into question the utility of the particular formalization. For example, the system does not tolerate sidewards movement, and whether this formalization is empirically and theoretically useful may rest on whether UG allows sidewards movement or not. But the theoretical and empirical adequacy of sidewards movement, and so of formalizations that do or do not encode it, is not a question that any given formalization addresses or can address (as Collins and Stabler know). So, whatever the utility of formalization, to date and with some exceptions (I will return to two), it does not primarily lie in making theories that would otherwise be untestable, testable.

So what is the utility? I think that, when done well, formalization allows us to clarify the import of our basic concepts. It can lay bare the conceptual dependencies between our basic concepts.  Tim Hunter's thesis is a good example of this, I think.  His formalization of some basic minimalist concepts allows us to reconceptualize them and consequently extend them empirically (at least in principle).  So too with Alex Drummond's formal work on sidewards movement and Merge over Move. I mention these two because this work was done here at UMD and I was exposed to the thinking as it developed. It is not my intention to suggest that there is no other equally good work out there.

Second, any theory comes to be tested only with the help of very many ancillary hypotheses.  I confess to feeling that lots of critics of Generative Grammar would benefit from reading the work criticizing naive falsificationism (Lakatos, Cartwright, Hacking, and a favorite of mine, Laymon). As David emphasizes, and I could not agree more, it is not that hard to find problems with virtually every proposal. Given this, the trick is to evaluate proposals despite their evident shortcomings.  The true/false dichotomy might be a useful idealization within formal theory, but it badly distorts actual scientific practice, where the aim is to find better theories.  We start from the reasonable assumption that our best theories are nonetheless probably false. We all agree that the problems are hard and that we don't know as much as we would like. Active research consists in trying to find ways of evaluating these acknowledged false accounts so that we can develop better ones. And where the improving ideas will come from is often quite unclear.  Let me give a couple of examples of how vague the most progressive ideas can be.

Consider the germ theory of disease. What is it? It entered as roughly the claim that some germs cause some diseases sometimes. Not one of those strongly refutable claims. Important? You bet. It started people thinking in entirely new ways and we are all the beneficiaries of this.

The Atomic Hypothesis is in the same ballpark. Big things are made up of smaller things. This was an incredibly important idea (Feynman, I think, thought it was the most important scientific idea ever).  Progress comes from many sources, formalization being but one. And even pretty labile theories can be tested, as, e.g., the germ theory was.

Third: Alex Clark suggests that only formal theories can address learnability concerns. I disagree. One can provide decent evidence that something is not learnable without this (think of Crain's stuff or the conceptual arguments against the learnability of island conditions). This is not to dispute that formal accounts can and have helped illuminate important matters (I am thinking of Yang's stuff in particular, but a lot of the work done by Berwick and his students is, IMO, terrific). However, I confess that I would be very suspicious of formal learnability results that "proved" that Binding Theory was learnable, or that movement locality (aka Subjacency), the ECP, or structure dependence was. The reasons for taking these phenomena as indications of deep grammatical structural principles are so convincing (to me) that they currently form boundary conditions on admissible formal results.

As I said, the discussion is worth reading. I suspect that minds will not be changed, but that does not make going through it (at least once anyhow) any less worthwhile.

3 comments:

  1. I don't think I made my point about universals very clearly. And I will frame this using naive Popperian talk just to avoid long digressions, though like you I think it is simplistic.

    The grammars are unobserved and we are trying to figure out what they are. So if I say the grammar for English is G1, then this makes some predictions: G1 might generate "which pictures of himself did Mary like?", and since this is not grammatical you can say, OK, G1 fails this empirical test.
    So on the basis of data about grammaticality, ambiguity, the range of acceptable interpretations and so on, one could rule out G1 and G17 and so on.
    So there are obvious issues about how to choose between G1 and G2 if both define exactly the same set of sound-meaning pairs, and one can argue about how much precision you should use to specify G1 and G2, with me arguing that you should define a generative grammar and David P saying that informal specifications are OK for his purposes.
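    (A minimal toy sketch of this kind of test, purely illustrative and not anyone's actual proposal: the tiny "G1" below, written with NLTK's CFG tools, is an invented grammar that wrongly generates the offending string, so the usual judgment data rule it out.)

        import nltk

        # Hypothetical toy grammar "G1" (invented for illustration): it happily
        # lets a wh-phrase containing "himself" front past "Mary".
        g1 = nltk.CFG.fromstring("""
        S -> WHNP AUX NP VP
        WHNP -> 'which' 'pictures' 'of' 'himself'
        AUX -> 'did'
        NP -> 'Mary'
        VP -> 'like'
        """)

        tokens = "which pictures of himself did Mary like".split()
        trees = list(nltk.ChartParser(g1).parse(tokens))
        print(len(trees) > 0)  # True: G1 generates the string, but speakers
                               # reject it, so G1 fails this empirical test.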

    When we come to universal statements like "All grammars have property P" then things become a little less clear. If we could look inside people's heads and see what the real grammars are, then we could just check that the grammars are in fact P. But we can't do that.



    So this is tricky, because it is a universal statement about something *unobserved*.
    And as you said in your post on Evans and Levinson, these statements are hard to refute.
    If there is a language L which might be problematic, it is not enough for a sceptic to say: here is a possible grammar G1 for the language L which is not P. Because there may be another grammar G2 for L which *is* P.
    So the sceptic has to show that there is *no grammar* for L which has property P.
    And that is very very hard if P is not mathematically precise.

    The problem here is that we do not know exactly what the grammars are. The grammars are theoretical constructs, and we don't know that they are correct. As you say: "We start from the reasonable assumption that our best theories are nonetheless probably false".

    (so maybe P might be something like "has hierarchical structure". The sceptic might claim e.g. that Piraha is a counterexample; and show a non-hierarchical grammar for Piraha. But you can say, no this is not a refutation because here is a grammar which *is* hierarchical for Piraha).

    If P is precisely specified, then it is possible. And the Shieber/Kobele arguments seem to work -- in the Shieber case P is "being weakly or strongly equivalent to a CFG". One can show that there is *no* context-free grammar which can generate Swiss German. So these sorts of mathematical statements have some empirical content.
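    (A compressed, simplified sketch of the shape of that argument; the actual Swiss German data and the regular language R used are in Shieber 1985.)

        % Context-free languages are closed under intersection with regular
        % languages and under homomorphisms, so if Swiss German (as a string set)
        % were context-free, the image below would have to be context-free too.
        \[
        h\bigl(L_{\text{Swiss German}} \cap R\bigr) \;=\;
        \{\, a^{m} b^{n} c^{m} d^{n} \mid m, n \ge 0 \,\}
        \]
        % But that language is not context-free (pumping lemma), so no
        % context-free grammar weakly generates Swiss German, and a fortiori
        % none strongly generates it.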

    But if P is not precise (say, "having structurally sensitive movement rules" or "having hierarchical structure" or "having DPs"), then these statements may simply have no empirical reflex in the sorts of data that linguists have access to, even if they may be true statements about what is ultimately in the head.

    It seems to me that there is no way you can disprove a universal claim about grammars without proving something mathematical, because of this problem of universal quantification over grammars. And that critically requires mathematical precision in the specification of P. And without that precision, the statement is vacuous.

    More generally, claims about UG are further away from the data than claims about Gs, and thus need more precision.



  2. I'm on vacation and traveling, but I will try to write something soon. The discussion has been vigorous, mannerly, and enlightening. I hope to find time to join it soon.

  3. I suggest that one way to get away from the problems raised by Alex might be to think of UG as a 'betting guide': given that we see certain things in a language, how much should we be willing to bet that certain other things are (not) there as well?

    So, for example, if we have extractions like 'Who do you think Mary likes' and 'Who did Bill say Mary likes', it seems a pretty good bet (at least a day's pay) that we also have 'Who do you think Bill said Mary likes' and 'Who did you say Bill thinks Mary likes' (with a bit of degradation due to the extra complexity, but no clear OK/not-OK divide), and, if the language marks the escape hatches in the manner that Irish does, that the complex examples will have two escape-hatch marks.

    The basic problem that this approach attempts to sidestep is the extreme diversity in possible ways of formulating grammars, which makes it hard to identify any properties that they are all supposed to have in a concrete way that will satisfy all of the people who are bright and knowledgeable enough to be worth trying to satisfy. Focusing attention on expectations about the viability of specific form-meaning relations, based on what is already known about a language, rather than on the existence or not of various abstract mechanisms, might be useful.

    For the betting guide idea to work in a rigorous way, we will need to have solid ideas about how corpora determine grammars, but that seems relatively accessible now, at least compared to how things were 30 years ago.
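    (One very rough way to picture the betting guide, purely as an invented illustration with made-up construction labels and counts, is as a conditional relative frequency over languages or corpora in which the simpler pattern is already attested:)

        from collections import Counter

        # Invented counts: of 50 hypothetical languages/corpora with single-clause
        # wh-extraction attested, 47 also show the two-clause version.
        observations = Counter({
            ("1-clause wh-extraction", "2-clause wh-extraction"): 47,
            ("1-clause wh-extraction", "no 2-clause wh-extraction"): 3,
        })

        def bet(given, outcome):
            """How much of a day's pay to wager that `outcome` holds,
            given that `given` is attested."""
            total = sum(n for (g, _), n in observations.items() if g == given)
            return observations[(given, outcome)] / total if total else None

        print(bet("1-clause wh-extraction", "2-clause wh-extraction"))  # 0.94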
