Monday, March 17, 2014

Hoisted from the comments

Dennis O linked to this paper by Sascha Felix. It provides a thumbnail history of GG, the relation between descriptive and explanatory motives for language study, and the effects that MP has had on this. His description strikes me as more or less correct. I have one quibble: as I've noted before, MP is not really an alternative to GB, and the problem of explanatory adequacy does not go away just because we are interested in going beyond explanatory adequacy. The way I think we should look at things is that GB is the "effective theory" and MP aims to provide a more fundamental account of its generalizations. That said, I think Felix has largely identified the cleavage in syntax today, and he rightly notes that what made the field exciting for many now occupies a very small part of current research. I have been wont to describe this by saying that Greenberg won. At its best, the current enterprise aims to find the patterns and describe them. The larger vision, the one that makes linguistics interesting for Chomsky, Felix and me, is largely a marginal activity. As I noted in an earlier post, I believe this will have baleful effects for the discipline as a whole unless we witness a resurgence of interest in philology in the culture at large. Don't bet on it.

8 comments:

  1. You think that GG (which, as you use it, is what I would call GB/MP) should have more Plato/Darwin-like considerations at work. I have claimed that this is nonsense until we have some halfway reasonable linking hypothesis. From your comments, I reconstruct that you either believe (i) that (y)our intuitions about how learning/evolution might work are good enough, or (ii) that more general simplicity considerations should guide us until something more concrete is worked out. If the latter, then doesn't your lament (at least functionally) boil down to ruing the (perceived) fact that people aren't worrying about simplicity as much as you think they should? This could be the basis of a profitable discussion.

    I think there is a third pole that is being ignored here: that of (say) Montague. While the Greenbergian pole is that of loose but broad description, and the Felix/Chomsky/Hornstein pole that of simplification, the Montague pole is that of tight but narrow description: grammar fragments. My lament is that this third pole is being (and has been since ... REST?) completely ignored in GG. Not only do grammar fragments provide a sanity check on Greenbergian proposals (do they actually work?), they also provide a precise characterization of the mechanisms that the F/C/H pole seeks to give a simpler statement of.
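
    For concreteness, here is roughly what I mean by a fragment, boiled down to a toy (the grammar, lexicon, and sentences here are invented for illustration; a real fragment would be far richer and would pair the syntax with a semantics): because every rule is stated explicitly, the claim that the fragment generates a given sentence can be checked mechanically rather than taken on faith.

    ```python
    from itertools import product

    # A toy fragment: binary rules in Chomsky normal form, plus a lexicon.
    # (All rules and words are invented for illustration.)
    RULES = {
        ("NP", "VP"): "S",
        ("Det", "N"): "NP",
        ("V", "NP"): "VP",
        ("P", "NP"): "PP",
        ("NP", "PP"): "NP",   # PP attachment to nominals
        ("VP", "PP"): "VP",   # PP attachment to verb phrases
    }
    LEXICON = {
        "the": {"Det"}, "dog": {"N"}, "cat": {"N"},
        "telescope": {"N"}, "saw": {"V"}, "with": {"P"},
    }

    def recognizes(words, goal="S"):
        """CKY recognition: does the fragment assign category `goal` to `words`?"""
        n = len(words)
        chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            chart[i][i + 1] |= LEXICON.get(w, set())
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for pair in product(chart[i][k], chart[k][j]):
                        if pair in RULES:
                            chart[i][j].add(RULES[pair])
        return goal in chart[0][n]

    print(recognizes("the dog saw the cat with the telescope".split()))  # True
    print(recognizes("dog the saw cat".split()))                          # False
    ```

    The point is not the implementation but the discipline: the fragment's predictions are exact, so any proposed simplification of its mechanisms can be tested against exactly the same sentences.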

    Since you say you believe that GB is more or less right, I can see why you are frustrated that so many people are languists -- they are proceeding on the basis that GB is not obviously more or less right, and that we need to do more work to figure out what the right generalizations are. On the other hand, it seems that you should welcome Montagovians with open arms -- how else are you going to know what the consequences of your proposed modifications of theoretical ideas are?

    1. Ahh, once more into the breach! Grammar fragments? If you insist. I have nothing against these, though I do not think that Montague's vision added much to the discussion (well, maybe the idea that syntax and semantics are done in tandem). At any rate, I don't believe PP and DP are nonsense, or if they are, they are very useful nonsense. Colin, in some earlier comments on the previous post, has accurately described what's going on at UMD and how we have tried (and IMO succeeded) in marrying theoretical work with other methods. POS considerations REALLY DO mesh well with these other approaches. So, in my view, this both can be and is being done. So if nonsense, very useful and productive nonsense.

      As for Darwin's problem, this is a more abstract issue. However, I believe that here too there has been good work done. I've argued before that part and parcel of this should be an interest in unifying the modules of GB, skepticism concerning parameter models, and an interest in examining Bare Output Conditions as they bear on language use. I can see that these don't impress you. Fine. Maybe others will find them congenial. However, I think that these ideas do have legs and have had results, and so my view is quite a bit less jaundiced than yours.

      Last points, and an invitation: you are right that a good chunk of my frustration can be summed up in two points: skepticism about the status of GB and disdain for simplicity. So let me say a word or two about each.

      GB, IMO, has proven to be pretty accurate so far as it goes. The modifications to it have not, IMO, overturned the basic descriptive findings. There are indeed ECP effects, binding effects, island effects, case effects, theta effects, etc. They are well described *to a first degree of approximation* in GB terms. These effects need deeper explanation, and that's something MP can aim to provide. This is a FIRST step in an MP project, not the last. The next is to try and factor out what is specifically linguistic from the unified descriptions. Those who have indulged in POS reasoning know how this could be done. After all, our job has been to try and factor out the universal from the language-particular. Kick this up a notch and we have a recipe for MP-style arguments. The novelty, as Felix notes, is that we need to go beyond purely ling/distributional data. OK, how do we do this? Well, there are models out there that I've tried to highlight on the blog. This may not be to everyone's taste, and some might even consider it nonsense, but nothing I've heard convinces me that this is not a coherent enterprise, and one well worth pursuing.

      Simplicity: As I argued (or tried to) in the last post, simplicity considerations can be important here. Unification is partly driven by this (again, consult 'On Wh Movement'; it is the holy text). However, the trick is to give 'simplicity' some content in the local research setting. That setting includes the results of GB and something like Darwin's problem. I've discussed this far too much in other places (chapter 1 of 'Move!' and 'A Theory of Syntax'), so I will refrain from doing it again here.

      Let me end by asking you a question: what insights has building fragments of particular Gs given us lately? You are a partisan, so maybe an illustration would be useful. If you need more space to illustrate, I would be happy to have you offer a guest post on the virtues of fragment building. Let's see the proposal in action.

  2. Just to add to the confessional tone of Felix's piece, here's some Norbert bio. I did not get into linguistics by way of languages, but by way of philosophy. My undergrad major was in philo and my PhD was as well (Harvard, not even MIT). But when I got into the field, this would not have been considered an odd route into the discipline (Higginbotham and Pietroski are two others who made it into ling this way). What captured my attention, thanks largely to endless discussions with Elan Dresher (thx!) was the light that the study of language shed on the empiricism/rationalism debate. These two very different ways of studying minds and doing science were center stage when I was an undergrad and Chomsky's debates with Quine, Putnam, Kripke, Lewis, Searle, Goodman, Dummett, etc. packed the philo journals. It was intellectually exciting and gripping stuff. To my young eyes what Chomsky did was show how the very rarefied debates between two ancient approaches to "natural philosophy" could be debated empirically. Boy was that exciting, and fun.

    To my recollection, these issues were front and center for most of my undergrad and grad career. Go back and take a look at the vast literature, for example, on the "indeterminacy of translation," or the papers on the "innateness controversy," or on Chomsky's rationalism. The main journals were packed with this stuff. This was even true within linguistics. Proof of this was the virtual requirement that any thesis begin with a discussion of (lip service to?) the POS. Things began to change in the mid-to-late 80s. Chomsky's stuff was still read by philosophers, but he was no longer a mainstay of the philo literature. Some began to describe him as a "philosophical naif." In linguistics, the large issues began to recede with the rise of comparative grammar, I believe. If pressed, I would say that the influx of Europeans into GG had a big impact, given that they came to GG from a strong philological tradition, where language analysis (and its concomitant tools) was a lively university industry (unlike in North America). Felix describes this well in his essay. Interestingly, this was NOT all that typical of the North American scene when I was younger, maybe because there were few ling depts or philology depts, and so many students came into the field from philo, math, or some other area that was not language based.

    At any rate, things have changed. We see this in our applicant pool for grad school. Many students come with the kinds of interests Felix describes, and the field has adopted what appears to be near-exclusive respect for a single style of research. To repeat, this is/can be very good and useful work. The unfortunate thing is that it seems to have mostly pushed out anything else.

  3. One more observation about Felix's piece. I think his point about the challenge that MP poses for traditional methods of linguistic investigation is spot on. MP's real challenge is to our methods of investigation and the requirement that we start thinking about more than language data to pursue the program. In this regard, MP is more of a challenge than GB and the REST were, as it pushes us to find novel methods to pursue the enterprise. IMO, this is all for the good, and we are starting to see models of how to do this. I've mentioned some on the blog before and will continue to flag them. However, feel free to jump in and suggest others that you think fecund.

  4. One thing that seems to me to be missing from the discussion here and in the previous thread is the role of typology, which I think has a great deal of capacity to make up for the lowered utility of PoS arguments (which are weakened by the fact that learning is more powerful than we used to think it was, and that the primary linguistic data is large: by the time people are 6, they have been exposed to tens of millions of words of data, and by the time they are reasonably far into university, probably more than 100 million). It seems to be the case, for example, that the structure dependence of subject-aux inversion is learnable in principle, but in fact it is surely not learned, because structure-independent operations of this kind appear to be typologically unattested (an interesting contrast with 2nd-position clitic placement, an 'insertion' rather than 'extraction' rule, where a limited amount of structure independence is often found). Likewise, I think I can imagine some ways in which a person might learn that extraction is impossible from finite why-clauses, but if this is universal (as David Pesetsky suggested a few months ago), then the default assumption should be that it is not learned, but rather that some fairly strong universal factor prevents or at least inhibits it.
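
    To make the subject-aux inversion contrast concrete, here is a throwaway sketch (my own toy, with the relative-clause span stipulated by hand rather than parsed): the linear rule and the structural rule diverge exactly on sentences with an auxiliary inside a subject-internal clause.

    ```python
    WORDS = ["the", "man", "who", "is", "tall", "is", "happy"]
    AUX = "is"
    EMBEDDED = range(2, 5)  # span of the relative clause "who is tall" (stipulated)

    def front_first_aux(words):
        """Structure-independent rule: front the linearly first auxiliary."""
        i = words.index(AUX)
        return [words[i]] + words[:i] + words[i + 1:]

    def front_main_aux(words, embedded):
        """Structure-dependent rule: front the first auxiliary that lies
        outside the embedded clause, i.e. the main-clause auxiliary."""
        i = next(k for k, w in enumerate(words) if w == AUX and k not in embedded)
        return [words[i]] + words[:i] + words[i + 1:]

    print(" ".join(front_first_aux(WORDS)))
    # -> "is the man who tall is happy"  (unattested, ungrammatical)
    print(" ".join(front_main_aux(WORDS, EMBEDDED)))
    # -> "is the man who is tall happy"  (the attested question)
    ```

    A learner with enough data could in principle tell these two rules apart; the typological observation is that no language appears to bother with the linear one.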

    The role of typology provides, it seems to me, a clear place for people with the peculiar combination of philological interests and modest but not pathetic mathematical talents that many people attracted to linguistics seem to have (comparable to the role of experimentalists in physics), without their having to be in some kind of perceived conflict with the more theoretical types (I think there's a certain amount of mutual disparagement between theoreticians and experimentalists in physics too, in fact, but it is better natured than what seems to go on in linguistics).

  5. @Avery. I think that reports of the death of POS arguments are exaggerated ... though your remarks highlight why they are currently on life support. You say that "learning is more powerful than we used to think" and highlight that children encounter millions of words. I can see that this has convinced many that linguists don't need to worry about POS arguments, but I think the evidence points in the opposite direction. Just a couple of quick examples:
    -- The fact that state-of-the-art NLP/ML can learn to parse 86% of a corpus (or whatever the current number is) has no bearing on learning the hard stuff; we've known forever that much of the interesting/hard stuff is rare-to-nonexistent in naturally occurring text/speech.
    -- Demonstrations of distributional learning in kids have received much attention, but they tend to deal with such simple cases that they have little bearing on the cases that motivate POS arguments.
    -- 10 million words is not much help if it fails to represent the hard stuff, or mis-represents it. Noise is still noise, even if you hear a lot of it. And there seem to be many cases where careful analysis of input corpora suggests that the evidence for harder stuff is absent or misleading.

    e.g., #1: Pearl & Sprouse (2013, Appendix B) give figures on a parsed corpus of child-directed speech, amounting to approx 10% of what a learner would encounter in a 3-year period. It's not encouraging.
    e.g., #2: Zukowski & Larsen (2011) did something similar for wanna-contraction. The input is misleading.
    e.g., #3: My impression of attempts to search child-directed speech for evidence on scope, subtle binding effects, etc. is that they're similarly unrevealing.
    e.g., #4: if we move to a supposedly in-your-face-easy case like figuring out the sound categories of a language, the initial enthusiasm for pure distributional learning is similarly overstated. Apparent successes in learning categories purely from distributions turned out to depend on major simplifications of the problem. Newer distributional approaches are doing better, but by incorporating key help from phonological or lexical knowledge.

    I may be misunderstanding the remarks above, but I find it striking that Greg regards learning considerations as so hard as to be nonsense, whereas Avery regards them as too easy. My take-away from this is that linguists have not been paying attention to the problem. My impression had been that linguists continue to pay lip service to such arguments as a motivation for the field, but perhaps even that has faded.

    1. My tune would be that there is a huge gap between 'more powerful than most generative linguists thought it was in 1988' and 'as powerful as Michael Ramscar and Dan Everett think it is now'. For example, I think Ramscar et al. have successfully refuted the Morphological Blocking Principle as proposed by me in my 1990 NLLT paper, but that doesn't mean that all the details of Icelandic case-marking and agreement can be learned without some moderately strong UG.

      At the moment, it seems to me that substantial and rubbish cases of PoS arguments are all jumbled up without much sorting having been done, but Pearl, Sprouse, yourself and many others are clearly getting onto it.

    2. The 1990 paper was about synthetic vs analytic verb inflection in Modern Irish, basically an LFG reanalysis of some aspects of McCloskey & Hale 1984, also in NLLT.
