
Tuesday, September 15, 2015

Aspects at 50

In case you do not know, the collection of essays that Angel Gallego and Dennis Ott put together to celebrate the publication of Aspects 50 years ago is now out and available both from MITWPL and online here. I'd be interested in people's reactions to these. For what it is worth, I think that the papers reflect (surprise surprise) two different conceptions (or at least emphases) in the field. Oddly (and need I add that this is my opinion), the Chomsky vision expressed so eloquently and provocatively in Chapter 1 seems absent (at least in any explicit form) from many of the papers in the volume. Aspects is one of the most eloquent expressions of the cognitive (and, hence, the biological) conception of linguistics. And if it is true (which it is) that God is in the details, then it is equally true that details that fail to bear on the central cognitive questions (descriptive and explanatory adequacy) are not details of obvious relevance to the Aspects program. In other words, we should always be asking ourselves what a particular analysis contributes to our understanding of FL. We may not always know, but we should always be asking.

I hope to write a little more about this in coming posts. But please feel free to say how this volume strikes you in the comments section, and thanks to Angel and Dennis for putting in all that work. It is a good snapshot of where the field currently is, I believe.

4 comments:

  1. [Part 1 of 2]

    This isn't about the volume as a whole, or even about the kind of concerns that Norbert is hinting at in his post, but I couldn't resist mentioning a couple of small points about this paper on weak and strong generative capacity. It illustrates rather perfectly a couple of pet peeves that often come up and consistently bother me (and I'm sure certain others too).

    First, the paper starts by quoting Chomsky's comment that "discussion of weak generative capacity marks only a very early and primitive stage of the study of generative grammar", and then calls this "a clear and explicit statement of Chomsky's perception of the linguistic relevance of formal and mathematical investigations of grammars and languages". What I think this gets wrong is that it equates formal and mathematical investigations with the study of weak generative capacity. While at one point these two areas might have overlapped almost entirely, that's no longer the case. We'll all be better off, I think, if we dissociate formal and mathematical methods of investigation from specific topics of investigation, such as weak generative capacity.

    Second, I agree with the Chomsky quote that considerations of weak generative capacity are only very "primitive" and coarse-grained tools, but this paper seems to go further and suggest that they are entirely irrelevant.[1] I think this is missing the forest for the trees. We really do learn something when we discover that the set of strings we want our "grammar of English" (usual idealizations) to generate is not generable by any finite-state machine; there's a small sketch of this point at the end of this comment. Granted, our "grammar of English" is a much more complicated beast than just a generator of strings, but one of the things it does is generate strings, and what we learn is that it can't do that in the way that finite-state machines do it.

    [1] It also seems to connect this supposed irrelevance with the empirical discovery that natural languages don't correspond perfectly, in weak generative capacity, to one of the original few classes of grammars that Chomsky put on his famous hierarchy. This just means that they correspond to some other point on the hierarchy that didn't get a name to start with, such as perhaps the mildly context-sensitive level.
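
    To make the "we really do learn something" point concrete, here is a minimal sketch (my own illustration, not anything from the paper under discussion): on the string side, nested dependencies boil down to the pattern a^n b^n, and recognizing that pattern requires an unbounded counter, which is precisely what a finite-state machine's bounded memory cannot supply. The function below is purely illustrative.

        # Minimal illustrative sketch: recognizing a^n b^n takes one unbounded counter,
        # so no finite-state machine (bounded memory) can recognize it exactly.
        def recognize_nested(s: str) -> bool:
            """Accept strings of the form a^n b^n (n >= 0)."""
            count = 0
            seen_b = False
            for ch in s:
                if ch == "a":
                    if seen_b:        # an 'a' after a 'b' breaks the pattern
                        return False
                    count += 1
                elif ch == "b":
                    seen_b = True
                    count -= 1
                    if count < 0:     # more b's than a's so far
                        return False
                else:
                    return False
            return count == 0         # every 'a' matched by a later 'b'

        # recognize_nested("aaabbb") -> True; recognize_nested("aabbb") -> False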

  2. [Part 2 of 2]

    As a possible analogy --- I don't think it lines up perfectly in all the details, but at least to me it has the same ring to it --- consider the "reverse" situation, where we learn something by restricting attention to the meaning side rather than the sound/string side. One of the many interesting things about the word 'most' is that it tells us that we need something more powerful than first-order logic to capture natural language meanings, because first-order logic can't do the kind of counting comparison that is required to express the truth condition (again, the standard caveats and idealizations) of 'Most dogs bark'. (By contrast, first-order logic does just fine with 'All dogs bark', 'Some dogs bark', and even 'Five dogs bark'.) Now of course our "grammar of English" is a much more complicated beast than just a generator of meanings or logical formulas, but one of the things that it does is generate meanings or logical formulas, and what we learn is that it can't do that in the way that first-order logic does it.
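
    For concreteness, the standard generalized-quantifier truth conditions (the textbook formulation, nothing specific to this discussion) make the contrast explicit; it is the cardinality comparison in the first line that no first-order formula can express:

        % 'Most' compares two cardinalities; 'All' and 'Some' are plainly first-order.
        \[ \text{`Most dogs bark' is true} \iff |\mathrm{dog} \cap \mathrm{bark}| > |\mathrm{dog} \setminus \mathrm{bark}| \]
        \[ \text{`All dogs bark' is true} \iff \forall x\,(\mathrm{dog}(x) \rightarrow \mathrm{bark}(x)), \qquad \text{`Some dogs bark' is true} \iff \exists x\,(\mathrm{dog}(x) \wedge \mathrm{bark}(x)) \]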

    The point about 'most', on the meaning side, is I think standardly accepted to be telling us something interesting. So I can't see why the point about nested dependencies on the string side is considered more peripheral.

    Personally, I have a hunch that it's something to do with the way it's common to think of the study of structures (i.e. syntax) as quite closely driven by facts about good and bad configurations of words on a page (i.e. the PF interface), and slightly less directly driven by facts about what those configurations mean (i.e. the LF interface). This has a tendency to let the study of meanings (LF interpretations) take on a life of its own independent of the structures, without letting the study of word sequences (PF interpretations) do the same. I'm not saying that good syntactic practice actually proceeds this way --- good syntactic practice treats both sides as sources of constraints on an equal footing, I would think --- but some of the ways in which we casually talk about "what syntax is" can sometimes feed this illusion, I think.

    Replies
    1. I agree for the most part. I think there are some additional historical and sociological reasons why weak generative capacity is completely dismissed by most linguists, but there's little use in discussing that. Rather, I'd like to say that you're still being way too nice to the Fukui paper (I guess deliberately, to focus on the more systemic issues). Unfortunately I lack the willpower to be quite as gentlemanly.

      1) The claim that structure (more formally, tree languages) plays no role in computational linguistics is completely unfounded. Quite the opposite is the case since n-gram models have reached their limit and tree transducers are becoming a more common sight in the field. The need to deal with semantics also naturally gives rise to structural considerations, and Dependency Grammar is all the rage in a variety of research areas.

      2) The remark about finiteness in language experiments completely misses the mark, because whether an object is actually finite is not the decisive factor in whether one should analyze it as a finite or an infinite object. That has been worked out very carefully by Savitch, but the basic argument is already in Syntactic Structures: we could model human languages as finite-state devices, but we don't, because that would be a horrible model that does not express the generalizations the right way. The same argument gives you a reason for treating the languages in the experiments as infinite.

      3) Page 126 has yet another claim that discussion has focused on string dependencies rather than more articulate structures. It's not quite clear which discussion is being referred to (animal communication?), but in general that's just not true.

      4) The brief remarks about copying reference Stabler04 in a footnote but not Kobele06, which has a lot more to say on this topic. It is arguably the primary reference on copy movement.

      5) The question of why Japanese shows certain restrictions on dependencies is ill-posed. It is trivial to write a grammar that obeys these restrictions. What would be problematic is if such restrictions could be found in every language, as the formalism would then overgenerate at the typological level. Fukui is implicitly suggesting that we do not find free variation across languages, but I'd like to see some actual evidence for that (disclaimer: I expect things to get very hairy once you consider languages with free word order or massive scrambling).

      6) Fukui asserts on p127 that an explanation for the restrictions on nested and crossing dependencies is "difficult to obtain if we only look at terminal strings". How so? This is an argument from incredulity, aka not an argument at all. For instance, the notion of well-nestedness is completely string-based but rules out various kinds of crossing dependencies (a rough definition follows below).
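
      For what it's worth, here is the rough formulation (my paraphrase of the standard dependency-grammar notion, not anything taken from Fukui's paper): writing A and B for the sets of string positions spanned by two disjoint parts of the structure, well-nestedness requires

        % Well-nestedness, roughly: disjoint substructures never interleave in the string.
        \[ \neg\exists\, a_1 < b_1 < a_2 < b_2 \;\text{ such that }\; a_1, a_2 \in A \;\text{ and }\; b_1, b_2 \in B \]
        % The condition mentions nothing but the linear order of string positions, yet it
        % excludes a whole family of crossing-dependency configurations.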

    2. 7) The claim that Merge is structure-oriented is also dubious. That is the standard view, but there are equally valid alternatives (e.g. the chain-based definition of MGs in StablerKeenan03) that seem to be perfectly adequate for what is discussed in the paper.

      8) Fukui formulates the hypothesis that "dependencies are possible in human language only when they are Merge-generable" and then asserts that this "is a generalization that cannot be made when the rule system of human language is divided into phrase structure rules and grammatical transformations".

      The first statement does not have the restrictive effect Fukui intends. At the very least, Merge has to be capable of regulating subcategorization requirements, and once you have that you have the full class of MSO-definable constraints over derivation trees, which can do pretty much all the things Fukui wants to rule out.

      The second statement is also false, since MGs can be decomposed into a phrase structure grammar for generating derivation trees and a transformation that turns derivation trees into derived trees. One might not consider MGs an adequate model of Minimalism (though I have never heard a convincing argument in that direction), but this is not at stake here. Fukui makes a broad claim that such generalizations are impossible in a system with rewrite rules and transformations. They are not.

      9) On p130, Fukui notes "it is premature to tackle such a question until we come up with a reasonable mathematical story of the Merge-based generative system". Implicature duly noted.

      I share Fukui's sentiment in the conclusion that we still need to learn a lot about strong generative capacity, but the paper completely fails to construct a compelling argument.
