
Friday, January 18, 2013

Effects, Phenomena and Unification


In the previous post, I mentioned that there is a general consensus that UG has roughly the features described in GB. In the comments, Alex quotes Cedric Boeckx as follows and asks whether Cedric is “a climate change denier.”

I think that minimalist guidelines suggest an architecture of grammar that is more plausible biologically speaking than a fully specified, highly specific UG – especially considering the very little time nature had to evolve this remarkable ability that defines our species. If syntax is at the heart of what had to evolve de novo, syntactic parameters would have to have been part of this very late evolutionary addition. Although I confess that our intuitions pertaining to what could have evolved very rapidly are not as robust as one would like, I think that Darwin’s Problem (the logical problem of language evolution) becomes very hard to approach if a GB-style architecture is assumed.

The answer is no, he is not (but thanks for asking). I’ll explain why, but this will involve rehearsing material I’ve touched on elsewhere, so if you feel you already know the answer, please feel free to go off and do something more worthwhile.

My friends in physics (remember, I am a card-carrying hyper-envier) make a distinction between effective and fundamental theories. Effective theories are those that are phenomenologically pretty accurate. They are also the explananda for fundamental theories. Using this terminology, GB is an effective theory, and minimalism aspires to develop a fundamental theory to explain GB “phenomena.” Now, ‘phenomena’ is a technical term and I am using it in the sense articulated by Bogen and Woodward (here). Phenomena are well-grounded significant generalizations that form the real data for theoretical explanation. Phenomena are often also referred to as ‘effects.’ Examples in physics include the Gas Laws, the Bernoulli effect, black body radiation, Doppler effects, the photoelectric effect etc. In linguistics these include island effects, principle A, B and C effects, weak and strong crossover effects, the PRO theorem, Superiority effects etc. GB theory can be seen as a fairly elaborate compendium of these. Thus, the various modules within GB elaborate a series of well-massaged generalizations that are largely accurate phenomenological descriptions of UG. I have at times termed these ‘Laws of Grammar’ (said plangently, you can sound serious, grown-up and self-important) to suggest that those with minimalist aspirations should take these as targets of explanation. Thus, in the requisite sense, GB (and its cousins described in the last post) can serve as an effective theory, one whose generalizations a minimalist account, a fundamental theory, should aim to explain.

I hope it is clear how this all relates to the Cedric quote above, but if not, here’s the relevance. Cedric rightly observes that if one is interested in evolutionary accounts then GB cannot be the fundamental theory of linguistic competence. It just appears too complex: all that internal modularity (case and theta and control and movement and phrase structure), all those different kinds of locality conditions (binding domains and subjacency/phase and minimality and phrasal domains of a head and government), all those different primitives (case assigners, case receivers, theta markers, arguments, anaphors, bound pronouns, r-expressions, antecedents etc., etc., etc.). Add to this that this thing popped out in such a short time, and there really seems no hope for a semi-reasonable (even just-so) story. So, GB cannot be fundamental. BTW, I am pretty sure that I have interpreted Cedric correctly here, for we have discussed this a lot over the last five to ten years on a pretty regular basis.

Given the distinction between GB as effective theory and MP as aiming to develop a fundamental theory, how should a thoroughly modern minimalist proceed? Well, as I mentioned before (here), one model is Chomsky’s unification of Ross’s islands via subjacency. What Chomsky did was (i) treat Ross’s descriptions as effective and (ii) propose how to derive these on empirically, theoretically and computationally more natural grounds. Go back and carefully read ‘On Wh-Movement’ and you’ll see how these various strands combine in his (to my taste buds) rather beautiful account. Taking this as a model, a minimalist theory should aspire to the same kind of unification. However, this time it will be a lot harder, for two main reasons.

First, what MP aspires to unify have been thought to be fundamentally different from “the earliest days of generative grammar” (two points and a bonus question to anyone who identifies the source of this quote). Unifying movement, binding and control goes against the distinction between movement and construal that has been a fundamental part of every generative approach to grammar since Aspects (and before, actually), as has been the distinction between phrase structure and movement. However, much minimalist work over the last 20 years can be seen as chipping away at the differences. Chomsky’s 1993 unification of case as a species of movement or Probe-Goal licensing (PGL), the assimilation of control to a species of movement (moi) or PGL (Landau), reflexive licensing as a species of movement (Idsardi and Lidz, moi) or PGL (Reuland), the collapsing of phrase structure and movement as species of E/I-merge, the reduction of Superiority effects to movement via minimality. All of these are steps in reducing the internal modularity of GB and erasing the distinctions between the various kinds of relationships described so well in GB. This unification, if it can be pulled off (and showing that it might be has been, IMO, the distinctive contribution of MP), would do for GB what Chomsky did for islands, and the resultant theory would have a decent claim to being fundamental.

The second hurdle will be articulating some notion of computational complexity that makes sense. In ‘On Wh-Movement,’ Chomsky tried to suggest some computational advantages of certain kinds of locality considerations. Whatever his success, the problem of finding reasonable third-factor features with implications for linguistic coding is far more daunting, as I’ve discussed in other posts. The right notion, I have suggested elsewhere, will reflect the actual design features of the systems that FL interacts with and the systems that use it. Sadly, we know relatively little about interface properties (especially CI) and we know relatively little about how FL would fit in with other cognitive modules. We know a bit more about the systems that use FL, and there have been some non-trivial results concerning what kinds of considerations matter. As I have discussed this in other posts, I will not burden you with a rehash (see here and here). Consequently, whatever is proposed is very speculative, though speculation is to be encouraged, for the problem is interesting and theoretically significant. This said, it will be very hard and we should appreciate that.

So, is Cedric a denier? Nope. He accepts the “laws of grammar” as articulated in GB as more or less phenomenologically correct. Is his strategy rational? Yup. The aim should be to unify these diverse laws in terms of more fundamental constructs and principles. Are people who quote Cedric to “épater les Norberts” doing the same thing? Not if they are UG deniers, and not if their work does not aim to explain the phenomena/effects that GB describes. These individuals are akin to climate change deniers, for their work has all the virtues of any research that abstracts away from the central facts of the matter.

44 comments:

  1. Norbert: could I trouble you to provide a full list IN PRECISE FORM of 'laws of grammar' as articulated in GB. Also, could you specify the difference between 'laws of grammar' and laws of grammar. Normally, the latter are taken to be propositions with truth values, and in fact, the value true. What about 'laws of grammar'? For extra credit, you might provide a reference to a work where each 'law' is characterized and presumably some justification is given for its 'law'-like status.

    Paul M. Postal


    Paul M Postal

    Replies
    1. Paul M. Postal, what ARE you on about? Not content to fill the previously useful LingBuzz with weekly spam papers, now you inflict CAPITAL LETTERS on us too. For SHAME (in precise form).

    2. What is suddenly wrong with capital letters? They have been acceptable on this blog until quite recently; I cite Norbert's January 8 post:

      "Before saying a bit more, let me shout out very loudly that I AM NOT CONDONING MALPRACTICE AND DISHONESTY."

      But if you have an answer to Paul's questions that would be most welcome.

  2. Linguitude...whatever I am on about, I am willing to put my own name on it, more than once even, by error. As for spam, sorry the concept eludes you. Spam is unwanted material sent to people who have not requested or ordered it. From Lingbuzz, to get a posted paper, one has to download it, that is, in effect, order it. These points aside, I thought your comment was an excellent non sequitur.

    Replies
    1. Hi Paul. Sorry, out the whole day and only saw this now.

      I feel that the question you asked was intended to be rhetorical and that you don't think much of the idea of laws. I was thinking, of course, of something like the A-over-A principle (just kidding!!). No, seriously: precise form? I am not sure what you intend. I am happy with the formulations of GB as in Haegeman's many introductions. They are more than satisfactory for current purposes. Indeed my friends in CS departments seemed to have no trouble formalizing these yet more precisely, but to what end? So these are good enough.

      I introduced the novel terminology 'laws of grammar', hence used quotes to indicate that. I intend them to be taken as laws of grammar. However, as this is not normal nomenclature, at least in linguistics, the quotes seemed condign. You probably read them as scare quotes. Cute, but not my intention.

      Truth: yes, they are roughly true, accurate to a good first approximation. Indeed some of them, e.g. crossover phenomena were, if I recall correctly, first described by you (nice work btw). These and local anaphoric licensing and principle C effects (an anaphor cannot c-command its antecedent) and islands and control etc are all reasonably interpreted as "laws" circumscribing what is linguistically possible, e.g. no reflexives without a local c-commanding antecedent, islands cannot separate antecedents from the traces they bind, and so on. These are laws in that they limn the possible, rather than just describe the actual.
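
      For concreteness, here is the flavor of a precise rendering, in Python. This is a throwaway sketch of my own for this comment (it is not Fong's implementation or anyone's published formalization, and locality, i.e. the binding domain, is deliberately omitted from the Principle A check):

      class Node:
          def __init__(self, label, children=(), index=None):
              self.label = label              # category label, e.g. 'DP', 'V'
              self.children = list(children)  # daughters, left to right
              self.index = index              # referential index, if any
              self.parent = None
              for child in self.children:
                  child.parent = self

      def dominates(a, b):
          """True if node a properly dominates node b."""
          return any(c is b or dominates(c, b) for c in a.children)

      def c_commands(a, b):
          """a c-commands b iff a sister of a is, or dominates, b."""
          if a.parent is None:
              return False
          return any(s is b or dominates(s, b)
                     for s in a.parent.children if s is not a)

      def principle_A_ok(root, anaphor):
          """Crude Principle A: some coindexed node c-commands the anaphor.
          The locality requirement (a binding domain) is left out on purpose."""
          def nodes(n):
              yield n
              for c in n.children:
                  yield from nodes(c)
          return any(d.index == anaphor.index and c_commands(d, anaphor)
                     for d in nodes(root)
                     if d is not anaphor and d.index is not None)

      # "John_1 likes himself_1": the subject DP c-commands the reflexive.
      john = Node('DP', index=1)
      himself = Node('DP', index=1)
      tree = Node('TP', [john, Node('VP', [Node('V'), himself])])
      assert c_commands(john, himself) and principle_A_ok(tree, himself)

      Making this yet more precise is routine; the hard part is getting the notions of 'binding domain' and 'anaphor' right, not writing them down.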

      Justification? To my mind the data that are cited, e.g. by Haegeman, are pretty convincing. There are some puzzles (e.g. reflexives within picture NPs) but by and large they fit the main data, about as well as the ideal gas law fits real gases (a good analogy in my view).

      Does this answer your question? I doubt it, but why not start here.

  3. So I think the root of our disagreement is in the final sentence of your post. What is the central fact of the matter? What is the central phenomenon that linguistics should explain?

    I am in a minority here because I think the fundamental empirical problem of linguistics is to account for language acquisition, and not to account for "island effects, principle A, B and C effects, weak and strong crossover effects, the PRO theorem, Superiority effects etc. "

    So from my point of view, GB (as reconstructed by you without parameters) is not phenomenologically correct because it does not account for the principal phenomenon to be explained (or, I guess, using traditional terminology, because it does not attain explanatory adequacy).

    From *my* point of view, a theory of grammar without a learning theory is abstracting away from the central fact of the matter, and is fundamentally inadequate as a theory of language.

    There is a quote by Keller and Asudeh that puts this very well: "A generative grammar is empirically inadequate (and some would say theoretically uninteresting) unless it is provably learnable. Of course, it is not necessary to provide such a proof for every theoretical grammar postulated. Rather, any generative linguistic framework must have an associated learning theory which states how grammars couched in this framework can be learned."

    I accept that this is a minority view, but for instance Chomsky in 1973 says "The fundamental empirical problem of linguistics is to explain how a person can acquire knowledge of language". I think he was right then; I don't know if he still holds that view.



    Replies
    1. Dear Alex,
      Since Norbert so kindly made sure everyone knows I am not a climate-change denier, let me try to make sure everyone knows he cares a lot about language acquisition (in fact, I think he cares more about it than most).
      You are not in a minority, Alex. The central problem is still language acquisition (in fact, some of us have been trying hard to relate Darwin's problem and Plato's problem), but I think it's wrong to say that the whole point is "to account for language acquisition, and not to account for island effects, principle A, B and C effects, weak and strong crossover effects, the PRO theorem, Superiority effects". You can't care about one without caring about the other. One cares about island effects, Binding principles, etc. not because they are intrinsically interesting (okay, maybe some people do, but if so, I feel sorry for them). One cares about these things because these principles make language acquisition possible.
      In an attempt to keep this reply short, let me quote a passage from a little-known paper by Andrew Nevins that frames the issue better than most:
      "Many critics of domain-general learning theories argue that domain- general architectures, such as neural networks, secretly “build in” a lot of innate structure, so that their simulations which appear to be learning complicated linguistic phenomena are doing so with a headstart. I want to argue that these critics are focusing on the partly-full nature of the glass, but that what is not there is even more important. What allows connectionist networks to succeed, when they do, is not what they have been built to bring to the task, but rather what they are specifically built not to bring to the task. It is for this reason that every connectionist network that has ever been programmed into a computer to learn, say, how the past tense is formed in English, through statistical tendencies, has already been built with no predicates or functions that can count the number of syllables in the input, with no representation of prosodic stress, and with no subroutines that can determine whether the word is palindromic. If connectionist networks kept statistics about every linguistic property inherent in the data, they would never be able to make any generalizations.
      The function of what is called Universal Grammar, then, is not really to provide a grammar, but rather to provide a set of constraints on what can and can’t be a possible grammar."
      (taken from Nevins, "Phonological ambiguity: What UG can and can’t do to help the reduplication learner," MIT Working Papers in Linguistics 48: Plato’s Problems: Papers on Language Acquisition, 113-126)
      If you don't care about island effects, etc. you can't be claiming to care about language acquisition, because you'd be ignoring the conditions that make learning possible in the first place. (The principles guide the child: Don't do this, don't do that. That's why most of them have a negative format; cf. Chomsky 1973, which you mention: "No rule can relate X and Y ...")
      This said, I think that we would also like to know how exactly island principles, binding, etc. are biologically implemented. Here, I think, GB does not provide the right format (that's the point of the passage you kindly quoted from my work). Do 'minimalist' versions of these laws provide a better format? I think so, but I won't be trying to defend it here. Nor will I be trying to defend that minimalist versions of these laws ought to please many 'non-Chomskyan' linguists, because some of them converge with what those guys have been saying for a while. Topic for another post, maybe.

      --Cedric Boeckx

  4. Thanks Cedric, that is helpful. I find myself generally in agreement with your UG from below strategy.
    But ...

    I think you are conflating the problem and the solution.
    If the problem is language acquisition, then yes, one solution is to build things like Principle B into the genome. That solves two problems: the acquisition problem, and the problem of why Principle B occurs in all languages (assuming for the moment that Principle B correctly describes the observational facts and is universal).
    But it creates a new problem: Darwin's problem. How did this get into the genome? But yes, this does make learning possible.

    But one certainly can be interested in the problem of language acquisition without thinking that this is the right solution or the only solution; indeed that is my position.

    From my perspective, Principle B is part of the problem, not part of the solution. How can we account for the acquisition of this non-obvious property? My answer -- I don't know.


    If there is a debate about what the fundamental problem is; say whether it is A or B, then this affects the appropriate research strategy. If we have model 1 which has a partial answer to A but no answer to B, and model 2 which has a partial answer to B but no answer to A, then if you think the fundamental problem is A then you would prefer model 1 and so on.

    So given a choice between a model that has a plausible learning theory but that fails to account for the acquisition of Principle B, and a model which has Principle B built in, but has no learning theory, then I prefer the former. But Norbert *clearly* prefers the latter. And this makes me question how sincere Norbert (and you and Chomsky) are about learning/acquisition being the fundamental problem.


    If you have a theory that completely fails to account for some problem A, and yet you are so sure that your theory is true that you call your opponents climate change deniers, then surely you can't think that problem A is *the fundamental problem*. You could only be so sure of your theory if you felt that problem A was peripheral and secondary.


    Alternatively, for example, Yoshinaka has shown that one can learn MCFGs, which are grammars with structurally sensitive movement, equivalent to Minimalist Grammars (see a recent paper by Ed Stabler http://www.linguistics.ucla.edu/people/stabler/StablerEK12.pdf for some interesting discussion). So this model does not account for Principle B -- but it has the bones of a plausible learning theory.

    So the *methodological* point is: here we have what is in my view a significant step towards solving what I think is the fundamental problem, and I hope that one can develop this towards explanations of the *secondary phenomena*, like island effects. Maybe something like the Pearl/Sprouse approach, maybe a Sag/Hofmeister reduction to processing -- I don't know; it's not that I don't care, it's just not fundamental.

    So there is a choice between Stabler-type MGs plus a Yoshinaka-type learner, and Norbert's GB with no learning theory at all. And which you choose depends on what problem you think is more fundamental.


    (Putting on my learning-theorist hat: your Nevins quote is a fair comment on the limitations of late-80s neural networks; I am not a big fan of neural networks either, but a *lot* has changed since then. In particular, we now understand that one can generalise with a huge number (indeed with an *infinite* number) of features if one controls what is called the 'capacity' of the learning machine. So that argument is based on a technical assumption which seems to be false in general, even though it may apply to some learning algorithms.)

    Replies
    1. Norbert here with just a brief comment: It seems to me that you are agreeing that something with the effect of Principle B accurately describes acquired grammars, at least to a first approximation. Good, we agree. Now what you don't like is postulating Principle B as innate. Given minimalist scruples, I don't like it either, but I believe that this means that I need to find a way to replicate its effects in some more acceptable minimalist way in order to have an adequate account. Now, in this case, I have actually worked on some alternatives and think it may be possible to have the effects of principle B without principle B in a system where all dependencies are products of Merge (E-Merge for local dependencies and I-Merge for non-local ones). If this can be done, then we can have B effects without an explicit statement of B.

      Note that this just restates the strategy of treating GB as an effective theory. These are targets of explanation by a fundamental theory. If we can all agree that GB is effective in this sense, then we can all agree that the next step is to find a better story that gets the effects of B (and the rest of the binding theory) without explicit statements of these principles.

      A further point: at any given time we are involved in multiple projects. It is perfectly reasonable to say that at present the story I am telling has nothing deep to say about X. That's par for the course. We of course all pray for messianic times when all will be revealed but... However, it is another thing to say that how to explain these phenomena is not part of the project, or that our project can ignore these forever, or that someone who takes these problems seriously is wrong. Nope. That's not playing fair.

      Last point: Norbert has a learning theory. He even outlined what a GB learner would have to learn. What you don't like is that the hypothesis space is very highly structured and that all that needs learning is what is a pronoun and what an anaphor. This I suspect can be learned by a pretty stupid Bayesian learner. As all else is "innate," that solves the learning of binding phenomena problem. You rightly observe that this raises Darwin's Problem. Yes, that's why I would like an alternative. But it's a bit unfair to say that there is no learning theory. It's just that the learning theory, given a rich theory of UG, might be pretty trivial.

    2. Thanks Alex for the reply. Quick reactions:
      1. If your answer to the problem is "I don't know", you won't be surprised to hear that I don't buy it.
      2. I am not confusing the problem and the solution. In fact, your reply suggests I am not: "If the problem is language acquisition, then yes, one solution is to build things like Principle B into the genome. That solves ... the acquisition problem"
      3. The quoted passage attributes to me something that I did not say. I am referring here to the "genome" part. One should not confuse nativism and geno-centrism. Non-genomic nativism is possible, and that's where some of my money is (I think some of Chomsky's money is there too, if I read him correctly about "third factors")
      4. I grant you that we have to worry about Darwin's problem (or, more generally, the biological implementation of FL). And it's tough, and I think you and I agree that whatever the solution to that is, it won't have a GB format. Could it have a minimal(ist) format? That's in fact where everyone's money is these days, Chomskyans and non-Chomskyans alike. Could the solution to that problem amount to lots of generic properties (shared across species and cognitive domains)? I very much think so, but as you know tinkering in biology could give rise to interesting interactions of old parts in new configurations, giving rise to emergent phenomena. Could these emergent things do the work the GB principles did? I think so (although here I may be in the minority). This is what I had in mind when I wrote the last paragraph of my earlier reply.
      5. Regarding this part of your reply "If there is a debate about what the fundamental problem is; say whether it is A or B, then this affects the appropriate research strategy." You allude to "a model that has a plausible learning theory", which model is that? (remember point 1, you can't tell me "I don't know") And (if such a model exists), are you sure it's not compatible with minimal(ist) positions in my point 4? The plausible learning theory will have to have some priors (biases, etc.). Could these be what minimalists are working towards? I once heard Eric Reuland say that GB laws are too language specific/modular to be true [he is right], but he also said that GB laws are too good to be false. Norbert likes to insist on this (I see he already posted something to this effect).
      6. Regarding your point "this makes me question how sincere Norbert (and you and Chomsky) are about learning/acquisition being the fundamental problem": you are right in a sense. I'd say (I think Noam and Norbert would agree) that "learning/acquisition" is not the right way to formulate the fundamental problem; "development/growth" is. I think Paul (Pietroski) posted on this theme, so I won't expand on it here.

    3. Cedric:

      1) Yes, I understand. But all theories have only partial coverage and only explain a subset of the phenomena in question (e.g. dark matter in modern cosmology). Does GB have an adequate account of Suffixaufnahme, for example? No, but that is not a reason to reject it completely. Rather we compare partial and incomplete theories on how well they account for what we think are the fundamental phenomena. Now for me, acquisition is fundamental and Principle B is not. Indeed for me Suffixaufnahme is more fundamental than Principle B (long story, but see below).

      2) Agreed: the problem is language acquisition and not to be confused with solutions of which there are *potentially* many, even if you only like one.

      3) Non-genomic nativism and 3rd-factor principles -- these *really* need some fleshing out to get beyond hand waving. I am receptive to these ideas in principle and if you have some pointers to a technical literature on this then I am all ears.

      4) Absolutely -- one of the reasons I am on this blog and making a nuisance of myself is because I like the rhetoric of MP about minimising UG. I have doubts about how this is cashed out in detail, though.

      5) I had in mind the sort of general learning theory I sketch in e.g. my invited paper at ALT 2010, "Towards general algorithms for grammatical inference", which leads to specific algorithms like that described in "Beyond semilinearity: Distributional learning of parallel multiple context-free grammars" with Yoshinaka in ICGI 2012 (which can learn suffix stacking). Both are available online.

      So to be clear, these are again partial and incomplete, and not the whole story -- they need other components (a stochastic component, a component that turns them into strong learners, and so on), but these are sketched in other papers, some under preparation, and I can send you drafts if you want. And personally I think that it is, as you conjecture, highly compatible with minimalist principles, but it has one major sociological problem -- the techniques derive from American structuralist linguistics and so it is absolute anathema for generative linguists of a certain generation. (A toy illustration of the basic substitutability idea follows at the end of this comment.)

      6) You are correct that we discussed this earlier -- I thought everyone was ok with acquisition as a neutral general term. Bear in mind that what I am talking about includes learning the lexicon and morphology as well as syntax, so it may be much broader than what Chomsky and you are talking about. Even Chomsky accepts that the word "learning" is appropriate for lexical acquisition (e.g. intro to LGB). So again I accept in principle the possibility of 'brute-causal' models of language acquisition -- e.g. Paul's nice example of a butterfly wing colour being triggered by temperature. But again that is just a logical possibility, not a proposal, and I thought triggering had largely been abandoned (along with parameters). And in the absence of some details, I don't think the terminology matters much.
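
      To give a concrete, if cartoonish, picture of the substitutability idea behind point 5: this toy Python sketch is just for this comment (it is emphatically not the MCFG algorithm in the papers above, nor any published learner). Words that share a (left, right) context get lumped into one distributional class.

      from collections import defaultdict

      def contexts(corpus):
          """Map each word to the set of (left, right) contexts it occurs in."""
          ctx = defaultdict(set)
          for sentence in corpus:
              toks = ['<s>'] + sentence.split() + ['</s>']
              for i in range(1, len(toks) - 1):
                  ctx[toks[i]].add((toks[i - 1], toks[i + 1]))
          return ctx

      def distributional_classes(corpus):
          """Lump together words sharing at least one context --
          the crudest possible substitutability criterion."""
          ctx = contexts(corpus)
          classes = []
          for w in ctx:
              for cl in classes:
                  if any(ctx[w] & ctx[v] for v in cl):
                      cl.add(w)
                      break
              else:
                  classes.append({w})
          return classes

      corpus = ["the dog sleeps", "the cat sleeps", "the dog runs", "a cat runs"]
      print(distributional_classes(corpus))
      # 'dog' and 'cat' share the context ('the', 'sleeps'), so they cluster,
      # and the determiners and verbs each fall into their own class.

      The real algorithms replace "share a context" with much more careful congruence criteria and handle discontinuous yields, but the engine is this kind of substitutability -- which is exactly why they smell structuralist.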

  5. Norbert:
    Because of length this comment requires n posts


    It was good of you to take the trouble to respond at some length to my obviously outsider comments. A few points:

    (1) On the minor point of "laws" vs. laws. When someone takes the trouble to burden their text with quotes around term X, I take it there is a reason, and in particular, the intention to distinguish "X" from X, usually involving some kind of hedging. But you say not, and "laws" are just laws. Fine.

    (2) Then you conclude with no basis that I 'don't think much of the idea of laws'. Of course I do. You continue by pointing to unspecified friends in unspecified CS departments who have no trouble formalizing GB principles 'more precisely' in unspecified work. The following 'but to what end' expresses a certain disdain, does it not, for the importance of precision.

    On that point, apparently your enormous admiration for Chomsky's work nonetheless leaves you unimpressed with the following declaration:
    "Precisely constructed models for linguistic structure can play an important role, both negative and positive, in the process of discovery itself. By pushing a precise but inadequate formulation to an unacceptable conclusion, we can often expose the exact source of this inadequacy and, consequently, gain a deeper understanding of the linguistic data. More positively, a formalized theory may automatically provide solutions for many problems other than those for which it was explicitly designed. Obscure and intuition-bound notions can neither lead to absurd conclusions nor provide new and correct ones, and hence they fail to be useful in two important respects. I think that some of those linguists who have questioned the value of precise and technical development of linguistic theory have failed to recognize the productive potential in the method of rigorously stating a proposed theory and applying it strictly to linguistic material with no attempt to avoid unacceptable conclusions by ad hoc adjustments or loose formulation." Syntactic Structures, page 5.

    Curious that. While my admiration for the author is currently a bit less than yours, I have nonetheless from the beginning found the content of this quote to be entirely correct and enormously important. Hence I never take the request for precision to be frivolous for the reasons he gave.

    (3) Still, we haven’t gotten to substance or any laws. In this area, I find your prose maddeningly vague and allusive. While you bothered to produce a 314-word response, instead of writing down some law(s) you refer the reader to unspecified works of Haegeman. I think I used to have one but gave it away, and have no access to any currently, so this is not terribly helpful.

    (4) Then you get to something with some substance.

    “Indeed some of them, e.g. crossover phenomena were, if I recall correctly, first described by you (nice work btw). These and local anaphoric licensing and principle C effects (an anaphor cannot c-command its antecedent) and islands and control etc are all reasonably interpreted as "laws" circumscribing what is linguistically possible, e.g. no reflexives without a local c-commanding antecedent, islands cannot separate antecedents from the traces they bind, and so on.”

    Replies
    1. Hmm, I thought that a reference to Haegeman would have sufficed, but I guess not. As for CS formalizations, you no doubt know Fong's early implementations of GB. This is what I was referring to.

      We agree on our admiration for a certain author. However, though I value precision, I actually also believe that how precisely something needs to be stated is a function of why you are interested in it. As continuing this sort of discussion will not enlighten, I will move on to your second post.

  6. This passage is intended to provide the content of your claim that there are GB laws of grammar. Let’s go over them:
    a. some of them, crossover phenomena.
    Comment: phenomena are not laws. So this is a null response. Moreover, the persistent claim that Principle C explains the strong crossover phenomena is the subject of an entire chapter of my 2004 book, Skeptical Linguistic Essays. This argues that the claim is untenable. Never responded to by anyone as far as I know.
    b. local anaphoric licensing. This is explicated as: “e.g. no reflexives without a local c-commanding antecedent.”
    Comment: I take the latter phrase to be the law. It is far from clear since from the beginning, no characterization of ‘anaphor’ was given independent of associated principles like this. I ignore that. What I would say, in fact have said in an article with Haj Ross, is that the claim is false. ‘Inverse Reflexives’ in the 2009 festschrift for Terry Langendoen: Time and Again, John Benjamins, Amsterdam. Also never responded to as far as I know. In it we describe French, Albanian and Greek simple
    clauses which arguably violate your formulation. These cases reveal inter alia a
    generalization. Roughly, the claim is not valid in general when there is a ‘derived’ subject (as in e.g. passives) which is reflexive with the antecedent in some nonsubject position. Interestingly, one can see this reflected even in English. The pair:
    (1) *Herself was described by Harriet to Arthur.
    (2) *Herself described Harriet to Arthur.
    work just the way your formulation claims they should. But consider:
    (3) It was herself that was described by Harriet to Arthur.
    (4) *It was herself that described Harriet to Arthur.
    One sees the same effect manifested in the non-English simple clauses I referenced.
    When one fills in the traces that the views under discussion posit, one will see that your principle claims that (3) is like (4), when it manifestly is not.
    c. Principle C effects: an anaphor cannot c-command its antecedent.
    This comes reasonably close to having a law-like character.
    Alas, it also crashes against (3).

    A last point on reflexives, etc. You mention ‘a few puzzles’, citing the well-worn picture-noun cases. The subtle implication is that this pretty much covers it. Cases like (3), etc. aside, this is entirely wrong. There are massive numbers of puzzles in many languages... for instance, the whole now-large literature on ‘long-distance reflexives’ consists of such, these being cases which the principle you state in effect says don’t exist. I might add that although it is often suggested that English lacks long-distance reflexives, it in fact has a variety of them beyond picture-noun cases; a sample:
    (5) Winston claimed that himself, ordinary people could never understand.
    (6) It was herself that Mary claimed Tod did not understand.
    (7) My book compared no book other than itself to your book.
    (8) Claudine treated you as inferior to herself.
    (9) That author claimed there would always be himself for you to count on.
    (10) No woman believed that anyone but herself deserved the position.

    Many such cases are discussed in an article by me entitled ‘Remarks on Long-Distance Anaphora in English’, in the hardly field-centric journal Style, Volume 40, 2006. It was part of an odd festschrift for Haj Ross.

    d. islands cannot separate antecedents from the traces they bind.
    There is obviously something right here... but this ignores issues of weak/selective islands and all their associated problems. One point is that the particular formulation has a GB flavor, but of course the basic idea goes back to Haj Ross’s thesis and had nothing to do with GB.

    Sorry for the megaverbosity but at least I didn’t use too many CAPS.

  7. It seems that we agree that Principle C has a rough law-like character (but for (3), to which I return anon). That, I take it, means that it is empirically adequate over a pretty wide domain of cases and there are some apparent problems. This, so far as I know, is often the case with laws of nature; again, think of the Gas Laws, the Germ Theory of disease (i.e. germs cause disease), Newton's laws etc. Most laws that are not fundamental have exceptions. The question is always whether this vitiates matters or is to be tolerated as an anomaly, noted and we move on. For my current concerns, I can tolerate an anomaly or two, though I would love to see them solved. You know, 10% empty, 90% full. So, it looks like we agree more or less on principle C (which, btw, has the standard Evans counterexamples as well, which require rethinking what antecedence is).

    Ok, the anaphor binding cases. Again, we agree over a pretty large domain. The cases you bring up are interesting, less the ones in (5)-(10) than the ones in (1)-(4). The former interest me less because I am not sure I believe that they are "true" anaphors. For example, for me, these are not in complementary distribution with pronouns, something that I take to be a diagnostic for "true" reflexives. As these seem fine to my ear with pronouns replacing reflexives, I will set them aside.

    Ok, the first four cases. First, with focus on 'herself' in (1) I get the same judgments as I get for (3). They are not as bad as (2) and (4), though without the focus on the reflexives they are not terribly good either. Have you done a careful evaluation of the judgment? I agree with the contrast, but how good is (3)? I ask because neither one of us is famous for the quality of his judgments. At any rate, say that it is good. Yes, it would be a puzzle for principle A as standardly stated, hence worth thinking through carefully. However, here's one thing one should not do: throw out principle A because of this, for then what do you do with the standard cases, cases where the data are quite a bit crisper than here? Where does this leave me? I agree it would be nice to refine principle A so that it accommodated (3) (and (1) with focus). That said, the refinement will leave the core cases the same, so I conclude that A is roughly correct.

    Islands? Ok, let's credit Ross (which I believe I did by calling them Ross's islands). All GB did is reanalyze these in terms of subjacency. You probably don't think this an advance; I do, but there are lots of problems with the account even for someone like me. Selective islands (mainly wh- and neg-islands) are interesting and, as you know, at least for wh-islands even the standard theory of subjacency needs a separate assumption to bring these in line with the strong islands. So there is no truly unified treatment. At any rate, I am happy with Ross's description which, again, I take to be a pretty accurate depiction of a law of grammar.

    I might add that I am glad that you think that these should be treated as laws. A point of terminological agreement.

  8. Interesting discussion. I have 2 questions for Norbert:

    1. It seems you and Paul disagree about what qualifies as a 'law of grammar'. Would it be fair to say that you have in mind the kind of laws Cedric Boeckx [2009] called Galilean [and contrasted with Aristotelean]? Or something even less definite - we could call them Darwinian, based on the paraphrase of his species definition: a species is whatever a competent naturalist wants it to be?

    I ask because it is not entirely clear to me from this passage:

    Most laws that are not fundamental have exceptions. The question is always whether this vitiates matters or is to be tolerated as an anomaly, noted and we move on.

    Correct me if I am wrong, but it seems to suggest that there are at least 2 different kinds of laws: fundamental laws and non-fundamental laws. It seems the laws you currently are interested in all have exceptions, but there are some that are fundamental and those do not have exceptions? If this is the case, are there any fundamental laws of grammar? And if so, can you provide examples? For me just names will do.

    2. You note that you and Paul have different judgments about some of the cases Paul raises. This reminds me of the earlier discussion about evolutionary issues. Assuming you agree that both you and Paul are highly competent native speakers of English, how can we explain the difference in your judgments? Is there a difference between your respective I-languages that somehow [never mind details here] is manifested in the genome? Or are those merely performance differences and you both share the same I-language in spite of the apparent differences in judgment about important grammatical issues?

    Replies
    1. I have nothing sophisticated in mind, just basic philo of science 100 stuff. There are laws at many levels. Fundamental laws will be exceptionless and will explain why we get the apparent exceptions we find in the non-fundamental laws. One of the nice indications that we are getting somewhere is when an exception to an apparent law gets explained at the more fundamental level. To my mind, the laws as outlined by GB are effective, not fundamental. I don't like exceptions, but at this level I would not be surprised to find some. As I said, this happens in the real sciences (i.e. physics) regularly and the world does not come to an end. In current linguistics, I think that principles A and B are not fundamental. There are many anomalies. Hopefully as we understand things better and get better accounts (some minimalist ones come to my mind as I write) some of these anomalies will be explained in a principled fashion.

      So 2 different kinds of laws? In a sense: there are laws of a fundamental theory and those that are approximately true in an effective theory. We hope that the approximations are explained as we get more and more fundamental.

      As per 2: Alex Drummond is likely correct. No two particular Gs are the same. They are the product of many factors. So all UGs might be the same without any two Gs being identical. As for judgments, they are influenced by a huge number of factors (see Jon Sprouse on this, a.o.) and so even with similar Gs we may not get identical judgments. This is no surprise. It's true for hearts and kidneys too. So, different judgments, no biggie, I don't think.

    2. Thanks for this. Just to make sure I understand you correctly. You write:

      "One of this nice indications that we are getting somewhere is when an exception to an apparent law gets explained at the more fundamental level."

      In this case the apparent law turns into an effective law?

      "As I said, this happens in the real sciences (i.e. physics) regularly and the world does not come to an end."

      Does this mean linguistics is not a 'real science'? What kind of science is linguistics then?

      "There are many anomalies. Hopefully as we understand things better and get better accounts (some minimalist ones come to my mind as I write) some of these anomalies will be explained in a principled fashion."

      You say here that there are many anomalies and that hopefully at one point some of these will be explained. Let's hope so indeed, though I would feel better being given a concrete example in which anomalies actually have been explained in a principled fashion. But what about the other anomalies that are still not accounted for at that later point? If there is always something that is left unexplained by your laws of grammar, how do I know they are the right ones vs. some other possible laws?

      I also note that you have not answered my question for any currently known fundamental laws. Does this mean there are none [currently known]?

      Lastly [I am citing here from Chomsky's "Lectures on Government and Binding"]:

      "In many cases that have been carefully studied in recent work, it is a near certainty that fundamental properties of the attained grammars are radically underdetermined by evidence available to the language learner and must therefore be attributed to UG itself" [p.3].

      This was published in 1986 but "is based on lectures [Chomsky] gave at the GLOW conference...in April 1979" [p. vii]. One would assume the 'recent work' was completed some 35 years ago. Given that Chomsky also speaks of a "rapidly developing field" [ibid.], surely in those 35 years we must have learned enough about the fundamental properties that must be attributed to UG itself to be able to name at least some of them and give a principled account.

      You say "We hope that the approximations are explained as we get more and more fundamental." but according to Chomsky we had already in 1979 near certainty about fundamental properties of UG and the field has been rapidly developing since. So why so hesitant when I ask for the names of a few fundamental laws?

    3. I can see how I confused you, sorry. Effective laws may not be perfect but they are likely pretty good. They may have anomalies. These, hopefully, will be resolved when the effective laws are accounted for in a more fundamental theory. In the "final theory" (Weinberg's nomenclature) all laws will be perfect. However, a mark of one theory being more fundamental than another is that the anomalies of the latter are explained by the former.

      Linguistics is a science. However, I sometimes, tongue firmly in cheek, distinguish between the "real sciences" (aka physics and parts of chemistry and molecular biology) and everything else. Why? Because they have made serious discoveries of real depth. I also think that linguistics has made some discoveries of real depth, but as I get a sore neck when I pat myself on the back, I don't often fess up to this. At any rate, take 'real science' to mean roughly physics.

      Fundamental laws in linguistics? I have a few candidates: e.g. structure dependence: all syntactic processes are structure dependent. Another that smells plausible to me is a version of relativized minimality (can't involve X and Y over a Z with the same features). I have further proposals of my own, but I am pretty sure that these would not be widely shared. If there is a fundamental theory out there, it looks to me like it will look minimalist. The problem is that there are relatively few minimalist theories out there. One of the things I am less in sympathy with than I used to be is the idea that minimalism is a program, not a theory. True so far as it goes. But if the program is to be fecund it had better generate some theories. I have a couple of proposed dogs in this hunt that I think do pretty well. But this judgment is very controversial (and likely self-serving). So, best to say that, at present, we have very few well-grounded fundamental laws, which is precisely why we need to start looking for more.

      One more point: physicists have been looking for a fundamental theory since it was discovered that relativity and quantum mechanics were incompatible at very short distances. They have been looking for over 50 years, if not longer. Some questions are hard. The ones in linguistics are very hard, for we know so little about the interacting mental modules. However, we should aim high even when we understand that the problems will be hard to resolve. I think we have found out a lot since LGB, but I believe that we are still quite far from figuring out what the fundamental laws are.

      Last point: I find the attitude you convey in the last two paragraphs of your note odd. Why would you think that a field that is all of 60 years old should have reached the point of knowing its fundamental properties? Would you use the same standards for biology or even physics? Remember, linguistics is a very young science and standards should be appropriate to the age of the enterprise and the difficulty of the problems. C, you often ask too much and therefore miss what's been accomplished. A modicum of modesty and a little charity in judgment would allow you to appreciate just how much has been discovered.

    4. Thanks for the clarifications. I am glad to learn that linguistics IS a real science, just maybe a bit younger than physics. But you sell yourself short with "a field that is all of 60 years old". In one of my favourite works [Cartesian Linguistics] Chomsky writes that his own work is in important ways a “rediscovery of much that was well understood in [the Cartesian] period” (Chomsky, 1966, p. 1) and he continues to stress his indebtedness to these roots by saying that “this return to classical concerns has led to a rediscovery of much that was well understood in...the period of Cartesian Linguistics” (Chomsky, 2009, p. 57).

      Further, it was you, not me, who suggested the parallels to physics:
      "My friends in physics ... make a distinction between effective and fundamental theories. Effective theories are those that are phenomenologically pretty accurate. They are also the explananda for fundamental theories. Using this terminology, GB is an effective theory, and minimalism aspires to develop a fundamental theory to explain GB “phenomena.”

      Now, given that you are right that physics has a much longer history than generative grammar, maybe it would be better to compare the latter to, say, genetics, which had a similarly major breakthrough to Chomsky's SS in April 1953, when James Watson and Francis Crick published the paper that presented the structure of the DNA helix - which gave rise to modern molecular biology. So maybe you can situate the discoveries of generative grammar in the context of those in molecular biology?

      And one last question. Towards the end of the Pisa lectures Chomsky remarks:

      "Evidently this is only a sketch of a true formalization. Further elaboration is necessary for FULL PRECISION, and it is necessary to ensure that other properties of the system are so narrowly and specifically constrained that they do indeed entail the consequences outlined. But the basic properties of the system we have been developing are incorporated into this account and IT IS FAIRLY CLEAR how to proceed to fill the gaps" {Chomsky, 1986, p. 335, my emphasis]

      So, as in the quote Paul gave, Chomsky emphasizes the need for precision, and he also states that it is fairly clear how to fill gaps that existed in 1986. I am sure these gaps have been filled by now and you are in a position to share a precise account?

    5. LGB was a precise fiction of the content of the Pisa lectures. Pretty decent versions are in Haegeman's various intro texts. There is a formal version good enough for CS purposes in Sandiway Fong's work, formal enough to run the programs and get the right results for a large range of cases. Frankly, making the various proposals precise enough for this has never been a big problem; e.g. Marcus did this for parts of the theory of movement including subjacency, as did Berwick for various aspects of grammar, as has Stabler. The theories are not hard to make precise. However, there are classes of puzzles, like some Paul noted, that persist, and the problem is not making things precise.

      Comparisons to modern post-W&C biology? I don't know tons about the latter, but my understanding is that we are still pretty much in the dark about the high-level details of the code. We know the very basics, but not how information, say about a given trait, e.g. height, body mass, gait style, is actually coded in the genes. Similarly in linguistics, we know a lot about the basics: phrase structure, the basic operations, some features of what is likely innate, the class of grammatically significant dependencies, but a lot is still very obscure. We don't know much at all about how brains code language, but that's true in virtually every domain of cognition, not just language. Indeed we don't know how object-centered descriptions are coded.

      In both language and genetics we are pretty much unclear about how specific organisms use the DNA instructions/UG to grow. These are questions at the frontier. We know what's used, but the details of how are obscure.

      I know that you asked your question very tongue in cheek. Good. I've always thought the analogy to modern biology would be a good one. Gallistel has farmed this intensively. I suggest you look at what he has to say, especially about the relation of the mental and brain sciences.

    6. Damn autocorrect. 'Precisification' became 'precise fiction.' Hmm, come to think of it, maybe this was divine intervention.

    7. Actually I was not joking at all, and it's not clear why you would think so. I had expected to hear something about an accomplishment comparable to, say, the completion of the human genome project in 2003. Obviously in any working field we do not expect final, curtain-drawing kinds of results, but we do expect measurable progress. 50 years ago using genetic evidence to solve crimes would have been impossible; now it is routine. I can use genetic tools to trace my ancestry etc. etc. Surely some results from work in generative grammar must have found applications on a similar scale by now - can you just give an example or two?

      Regarding the precision issue: you are of course right that if persisting puzzles are already detectable at a fairly coarse level of analysis, we do not need more precision. What we need is to find a way to either eliminate the puzzles or accommodate our theory to them. So is there any [even if currently still tentative] suggestion how to go about eliminating the puzzles you mention?

    8. Oh, technological applications? There are a few, though not many. There is recent interest in using grammar more effectively in automatic translation. Oddly, even Jelinek was partially converted to its utility. There are also low-level grammar checkers in your word-processing program. But none of these use much in the way of generative grammar. So, if that's the benchmark, I know of very little of interest. Interestingly, however, it seems the same was true of celestial mechanics after Newton. The big problem was finding longitude. They tried everything they could using celestial mechanics. Turned out what you needed was good portable clocks. I also hear that general relativity has exactly one application to its name, the GPS system.

      So, I have nothing to offer on that front, sadly. It would be good for linguistics if that were not so. But right now it is, so far as I know.

      As for eliminating the anomalies, yes, there is some work on some. The stuff on logophoric reflexives has resolved some of the problems, in my view. If this is a real category then some of the apparent principle A violations are eliminated. Some of the problems with islands, especially the apparent strong island violations in Swedish, have been reanalyzed by Dave Kush as involving small clause complements. There has been tons of work on weak islands, and some by Szabolcsi is very useful in making the cut Paul hinted at. These would not be standard island violations at all. So yes, there has been work resolving some of these problems.

    9. Oh, one last point. There is an equivalent of the human genome project (intellectual equivalent, not monetary), which is the cartography project in Europe. Rizzi, Cinque etc. have done a lot of filigree comparative work on the grammars of a lot of languages. And I am told that Cinque's work suggests something like a universal base hypothesis: phrases nest in the same way cross-linguistically, i.e. the functional hierarchy is the same. If correct this is very interesting and is genome-like in its breadth. This stuff is easily accessible and a lot of syntax is being done on this sort of project.

  9. Regarding (2), not all variation in I-languages has a genetic source. The environment has an effect too. That's why people acquire different languages depending on the linguistic environment they grow up in. P and N didn't grow up in the same linguistic environment, so it would not be surprising if there were differences between their I-languages. That being said, it's also entirely possible that their I-languages are the same in the relevant respect, and that some additional "performance" factor is responsible for their differing acceptability judgments. To my mind, Poverty of the Stimulus considerations make this second option the more plausible one, but that's just a hunch.

    1. Thanks for this. I worry a bit that, if you are right, we have no way of finding out whether there actually IS a common I-language [as I always understood Chomsky to claim] or not. In other words, the I-language theory becomes unfalsifiable.

      But my main concern at the moment was really whether there are any fundamental laws of grammar - I guess we have to wait for Norbert to enlighten us on that issue.

    2. Chomsky has never claimed that every adult has the same I-language. That would be inconsistent with (e.g.) the fact that some people speak English whereas others speak French, Japanese, etc. He assumes that every person's I-language starts out in the same initial state. This initial state is modified as the child develops, partly on the basis of linguistic experience. The hypothesis that the initial state is the same (or nearly the same) for everyone is not really central. It's just a useful and very plausible idealization.

      Let's start by noting that we belong to a certain species, and we can assume uniformity, idealizing away from variation. This species has what we can call an initial state, that is, a state prior to experience, fixed for the species called S₀. We discover through investigation that in particular cognitive domains, for example in the domain of language...the individual goes through a series of states and reaches what is in effect a steady state [which] in the case of language is invariably attained about the time of puberty. We can then ask ourselves what is the nature of the steady state attained and what must have been the character of the initial state for that steady state to be attained, given the nature of the existing experience. (From “Language and Learning: The Debate Between Jean Piaget and Noam Chomsky.”)

      The ‘steady states’ can of course vary between individuals, since their properties partly derive from the experiences of the individuals in question.

    3. I was only talking about Paul and Norbert, so what you say about French and Japanese seems utterly irrelevant. If their steady states are sufficiently different that, even after careful consideration, they cannot agree on the examples given by Paul, it would be interesting indeed to find out what underwrites this difference. But it appears then that 'experience' must have a very substantial impact on said steady state - a lot more than 'triggering' suggests, or than knowledge obtained "without training or relevant evidence" [Chomsky, Knowledge of Language, 1986].

    4. "I was only talking about Paul and Norbert, so what you say about French and Japanese seems utterly irrelevant."

      It's the same thing on a smaller scale. People who grow up in French/Japanese-speaking communities have quite different linguistic experiences, so they end up with quite different I-languages. Paul and Norbert had slightly different linguistic experiences, so they ended up with slightly different I-languages. That may or may not be what is responsible for their differing judgments in the case at hand. (The other options are, roughly, genetic differences, developmental differences, or ‘performance’ factors.)

      I'm not quite sure what you're getting at in the last sentence of your post. Chomsky's claim is that some of our linguistic knowledge is obtained “without training or relevant evidence”, not that all of it is. Did Paul and Norbert learn to use reflexives differently on the basis of training or relevant evidence? I doubt it, but if they did, that does nothing to undermine Chomsky's take on language acquisition.

    5. What you're talking about is semantics rather than syntax, isn't it?

    6. Actually, we were talking about syntax. And what puzzles me is that we are talking about examples that were suggested by Norbert as 'laws of grammar', so I would have expected them to be at the core of grammar, not at the periphery. But when it is possible, as Alex says, that "Paul and Norbert had slightly different linguistic experiences, so they ended up with slightly different I-languages" concerning something as important as laws of grammar, then I really worry about what is left [so to speak] 'in' the LAD. Alex says, "Chomsky's claim is that some of our linguistic knowledge is obtained 'without training or relevant evidence', not that all of it is." Fine, but if not even stuff that Norbert considers important enough to call 'laws of grammar' is covered under 'some knowledge', then what is?

    7. Are there no laws of motion because, given different initial conditions, bodies move in different ways? As a practical matter, we can assume most of the I-language is the same. We do this by assuming an ideal speaker-hearer and trying to fathom its I-language. We appreciate that there will be some variation, but it is not often enough to derail matters, and we concentrate on what overlaps. If we are interested in the differences, we must do some extra work, much as in other domains of biology (the heart vs. hearts). This does not threaten to bring down the enterprise any more here than elsewhere.
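      To make the physics analogy concrete - a minimal sketch of my own, purely illustrative, not anything proposed in the thread - one law of motion yields different trajectories from different initial conditions, much as one UG plus different experience yields different steady states:

      # One law (constant-acceleration kinematics), two initial states.
      # Same law, different initial conditions, different trajectories.
      G = 9.8  # gravitational acceleration, m/s^2

      def height(h0, v0, t):
          """The one shared 'law': x(t) = x0 + v0*t - (1/2)*G*t^2."""
          return h0 + v0 * t - 0.5 * G * t ** 2

      for h0, v0 in [(0.0, 20.0), (10.0, 5.0)]:  # two 'initial states'
          xs = [round(height(h0, v0, 0.5 * step), 1) for step in range(5)]
          print(f"h0={h0}, v0={v0} -> {xs}")

      The law is invariant; only the initial state varies. That is the sense in which variation among I-languages is compatible with fixed laws of grammar.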

    8. Syntax or semantics? Please consider this:
      (1) She'll do it herself.
      (2a) She'll do it for herself.
      (2b) She'll do it to herself.
      In my pidgin English, the “herself”s in (1) and (2) appear to have different meanings. No doubt “herself” in (2) is reflexive, but what about the one in (1)? The meaning of “herself” in Paul Postal’s (3) is the same as in my (1).

    9. Actually, Paul Postal's (3) is of the same kind as your (2a) and (2b). Your (1) is called an 'emphatic reflexive' and is of a different kind than the others. But I am not much of a syntactician and had better leave it to Norbert to explain the details.

    10. Yes, it would have been loverly to have it explained by Norbert.
      You’re right that the meanings of “herself” in (1) and (3) differ:

      (1) Harriet will do it herself. <--- Harriet will do it without help of others.
      (2) Harriet will do it for herself. <--- Harriet will do it for Harriet.
      (3) It was herself that was described by Harriet. <--- It was Harriet that was described by Harriet.

      Yet I can’t help feeling that those in (2) and (3) differ, too. Even if they do not - in which case you’re completely right - there is still a difference in their syntactic properties: “herself” in (3) is nominative while the one in (2) is accusative. Can a nominative phrase be an anaphor? Norbert, please help.

    11. There is, I believe, a focus effect in Paul's examples. That's why I added that I thought his simple passive case improved a lot with focus on the reflexive. This may be significant, for focus reflexives may pattern more like pronouns than like true reflexives. English reflexives apparently spring historically from a focus-marked pronoun. At any rate, that's what I was pointing to. However, this needs more analysis, much more.

      The cases you describe do indeed differ. There is a use of the reflexive which means, roughly, 'alone' or 'by oneself'. These are also interesting and have locality restrictions. But these may be reducible to locality restrictions on adverbial modification quite generally.

    12. Thank you. I should have got it from your Jan 19 comment at 11:28 AM. It’s the pragmatics, stupid! (It’s myself that I’m addressing.)

      Well, Christina, they seem not to respond to things like this simply because these are not at the top of their priority list.

    13. Sorry, I don't know what 'they' and 'their' refer to.

    14. The Ms (M stands for minimalist, or mainstream, whichever you prefer). I’ve tried to figure out what divides them from the rest of the field: why constructionists consider HPSG, LFG, etc. to be CxGs while the Ms take them to be GGs; why, for example, Jackendoff considers himself a generativist while Goldberg sees him as a constructionist (in fact he is both; as a matter of fact, everyone seems to be a constructionist to some extent). Of course, I was trying to learn a bit about the subject matter, which in fact took me most of the time, for I’m too slow to follow, and if I get it, I forget it. Still haven’t managed to get through the latest version of your Potpourri, to give an example. And there's already another post out here!

    15. Well, things should not be presented in so complicated [or obscure] a fashion that you have to struggle to understand what the issues are [as opposed to understanding every last detail of cutting-edge work]. Not every field is like quantum physics, and really great ideas usually can be communicated to most who are interested. Look at Chomsky's Syntactic Structures or his review of Skinner: you can easily see what his main points are and why they are important. That was brilliant work. Now compare "The Science of Language" - take his answer to McGilvray's question about his intellectual contributions and try to figure out WHAT he takes his contributions to be. He may think he made really great contributions, but his answer does not reveal what they are. And this can be said for most of the book - a work that is aimed at the general public...

    16. I haven't read the book yet; I really want to read your critique first. I did read Pullum's, but it didn't help.

    17. Pullum had a very limited amount of space to say something about the book. It is virtually impossible to say a lot of helpful things in such a setting.

  10. I think Norbert's conception of the difference between GB and MP is a good way to think about it, but I would like to add that some people think that the big pile of somewhat organized data that's been produced by GB, HPSG, LFG, etc. needs a great deal more 'curation', possibly (much) more than it needs attempts to explain it.

    Many of the empirical generalizations are questionable: the fixed subject constraint seems pretty well demolished (ruins surveyed by Asudeh in his 2009 paper), Principle B has overt apparent counterexamples in many languages, and the typology of bound pronouns is very complex. Many of the supposedly required principles got installed at a time when people greatly underestimated what learning could do, at least in principle. There are big architectural problems with many of the formalisms, and gaping holes in our conception of what they ought to cover: GB/MP still doesn't seem to have any sensible analysis of 'case-stacking' as first described to the world by Dench and Evans (1988) and discussed in various publications since then (the phenomenon seems to me to fatally crash Baker's 2008 ideas about how concord works; Erich Round's 2009 thesis is the latest, biggest, and best piece of work on it, afaik). And nobody seems inclined to think seriously about how the knowledge behind linguistic variation is represented by speakers and used in production: the focus in computational linguistics is on getting the best parse for utterances in a corpus, not on producing output with the kinds of statistics that the actual corpora have.
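    To illustrate that last contrast with a toy example (my own sketch, built on an invented mini-PCFG, not anyone's actual system): parsing asks for the best tree given a string, while modeling production means generating strings whose distribution matches a corpus. A minimal version of the generation direction:

    import random

    # A toy probabilistic CFG, invented purely for illustration:
    # each nonterminal maps to a list of (expansion, probability) pairs.
    PCFG = {
        "S":  [(["NP", "VP"], 1.0)],
        "NP": [(["she"], 0.5), (["Harriet"], 0.5)],
        "VP": [(["left"], 0.4), (["saw", "NP"], 0.6)],
    }

    def generate(symbol="S"):
        """Sample one string by expanding nonterminals top-down."""
        if symbol not in PCFG:  # terminal symbol: emit it
            return [symbol]
        expansions, weights = zip(*PCFG[symbol])
        chosen = random.choices(expansions, weights=weights)[0]
        words = []
        for sym in chosen:
            words.extend(generate(sym))
        return words

    for _ in range(5):
        print(" ".join(generate()))

    Over many samples the output frequencies track the rule probabilities (about 60% of VPs contain "saw", and so on); fitting such probabilities so that generated output matches a real corpus is the production-side problem the comment says is neglected.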
