
Saturday, March 14, 2015

Defeating linguistic agnotology; or why very vigorous attack on bad ideas is important

In a recent piece I defended the position that intemperance in debate is defensible. More specifically, I believe that there are some positions that are so silly and/or misinformed that treating them with courtesy amounts to a form of disinformation. As any good agnotologist will tell you, simply being admitted to the debate as a reasonable option is 90% of the battle. So, for example, the smoking lobby became toast once it was widely recognized that the "science" it relied on was not merely wrong, but laughable and fraudulent. You can fight facts with other facts, but once a position is the subject of late night comedy, it's a goner.

Why should this be so? Omer Preminger sent me this piece discussing the Dunning-Kruger Effect (DKE), something I had never heard of before. It is described as follows:

 ….those with limited knowledge in a domain suffer from a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it.

If the DKE is real, then it is not particularly surprising that some debates are interminable. The only constraint on their longevity will be the energy that the parties bring to the "debate." In other words, holding a silly, uninformed position is not something that will become evident to the holder, no matter how clearly this is pointed out. Thus, the idea that gentle debate will generally result in consensus, though a noble ideal, cannot be assumed to be a standard outcome.

The piece goes on to describe a second apparent fact. It seems that when there is "disagreement" people tend to split the difference.

people have an “equality bias” when it comes to competence or expertise, such that even when it’s very clear that one person in a group is more skilled, expert, or competent (and the other less), they are nonetheless inclined to seek out a middle ground in determining how correct different viewpoints are.

I've seen this in action, especially as regards non-experts. I call it the three remark rule. You go to a talk and someone says something that you believe might be controversial. Nobody in the audience objects. The third time there is no objection, you assume that the point made was true. After all, you surmise, were it false someone would have said so. Of course, there are many other reasons not to say anything, politeness being the highest motive on the list. However, in such cases reticence, even when motivated by the best social motives, has intellectual consequences. Saying nothing, being "nice," can spread false belief. That's why, IMO, it is incumbent on linguists to be very vociferous.

Let me put this another way: reasonable debate among reasonable positions is, well, reasonable. However, there are many views, especially robust in the popular intellectual press, that we know to be garbage. The Everett and Evans stuff on universals is bunk. It is without any intellectual interest once the puns are clarified. Much of the stuff one finds from empiricist minded colleagues (read, psychologists) is similarly bereft of insight or relevance. These positions should not be treated graciously. So, next time you are at a talk and hear these views expounded, make it clear that you think that they are wrong, or, better still, that they miss the point or beg the relevant questions. If you can do this with a little humor, so much the better. Whatever you do, do not treat the views with respect; doing so gives us every reason to believe the silliness will spread.

73 comments:

  1. So, just to be clear, then, the people who don't agree with you about generative grammar aren't just wrong, they're stupid, too?

    Okay, I'm not being charitable here. I know you don't believe this about everyone who disagrees with you.

  2. Let me repeat: I am not interested in why people adhere to views. I am interested in the views themselves. Some views are dumb. They can be dumb for various reasons. They can be deeply misinformed. They can be beside the point. They can be based on deep misinterpretation about what is at issue. Some can even be based on simple equivocations (i.e. puns). People hold such views for various reasons. They need not be dumb to hold them. But whatever reasons they have for doing so, those reasons do not change whether the ideas are dumb. IMO, as you may have noticed, I think that we are much too charitable in engaging dumb ideas. We treat them with a respect that they do not deserve. This misleads people into thinking that these ideas are reasonable. That is a big mistake. The art is in going after the ideas without going after the people that hold them. This, I confess, is not easy. But it is doable.

    Now, re generative grammar: It depends on what you mean by agreeing with me. Given that most working within generative grammar don't believe many of the things that I do, and given that I think that many things that I don't believe are nonetheless reasonable, agreeing with me about the details is not a condition for admission into my view of the reasonable. But if you are asking whether I think that people who believe that the generative enterprise writ large is somehow off kilter, i.e. who are globally skeptical about the enterprise, hold dumb views, then yes, I do. Just like I think that flat earth views are dumb, climate science denial is dumb, and denying the links between smoking and cancer is dumb. And not just a little dumb. Very very dumb. Does that mean that I could not be persuaded otherwise? In principle, yes. In practice, no. If the enterprise were wrongheaded, then everything we have learned about G and UG over the last 60 years would be wrong. Do I consider this likely? About as likely as discovering that smoking promotes healthy lungs and that the earth's climate is actually cooling.

    Let me add that I know that this sounds intolerant and closed-minded. However, that is my point. It is people who leave their minds open to these possibilities who, at this present moment, are retarding inquiry. Look, if you want to go out and build a perpetual (noah? he, he) motion machine, I will not stop you. But if you think that ideas advocating the possibility of one should be taken seriously, then we just disagree. Why? Because right now, GIVEN WHAT WE KNOW, trying to build one of these is DUMB. Ditto GG denialism. We can debate the details, not whether GG writ large is right.

    Replies
    1. "I think he thought that the object of opening the mind is simply opening the mind. Whereas I am incurably convinced that the object of opening the mind, as of opening the mouth, is to shut it again on something solid."

      —G.K. Chesterton

    2. Hah, I read this post belatedly just now, and was going to pull this quote out myself. A delight to see someone beat me to it!

  3. I'm having a hard time reconciling the emphasis on views with the invocation of the Dunning-Kruger Effect, which is unambiguously concerned with the intellectual limitations of people.

    More generally, I think that Krugman's Option #3, being "nasty, snarky, and loud", can be (very) problematic. Specifically, it can be abused to marginalize intellectual opponents, and it can be used to assert, falsely, that there is little uncertainty in a field. (In fact, I think Krugman's use of the tactic is a pretty good example of both, but I'm not an expert in economics, so take this assertion with a great, big grain of salt as you see fit.)

    To be clear, I'm on board with generative grammar. I'm not a syntactician, so I'm not going to pretend to have strong views about which theory of generative grammar is the best, but I agree with you that it's a problem if someone doesn't take it seriously in general while making claims about, e.g., language acquisition.

    And you're right that giving bad ideas about linguistic knowledge a place in the debate gives them an air of authority they don't deserve. So there's a cost to Option #2, consistent, persistent, and polite explanation of the ideas that (one believes) are good.

    I suppose my point is that the costs of Option #2 have to be weighed against the costs of Option #3. Personally, I prefer Option #2. If good ideas can't win out with persistent, polite explanation, I'll be unhappy, but I'd rather lose taking that approach than win by shouting people down and humiliating them. And, although I think the Dunning-Kruger Effect is very likely a real thing, I don't see the value in asserting that people are dumb for holding views that (I believe) are dumb.

    Replies
    1. The Dunning-Kruger effect is not about intelligence, it is about expertise. We are all guilty of it once in a while because nobody can be an expert on everything: the less you know about a topic, the less likely it is that you realize how little you know. When we complain about politics, bemoan what a crappy job the football coach is doing, or complain about Hollywood being out of ideas, we're simply talking out of our ass without realizing it.

      That doesn't make you dumb; for example, it is not a dumb thing to want politicians to do a better job or Hollywood to be more creative, but the solutions we have in mind are just as bad, if not worse; they are simply infeasible once all factors are taken into account (e.g. a movie industry that has big blockbusters as its only sustainable income --- due to even star-studded smaller budget movies no longer drawing a crowd --- inevitably faces a bigger financial risk that must be mitigated by strict adherence to formula and attention-grabbing franchise tie-ins).

      That said, I'm also wary of option 3, because we already had this as the standard mode of engagement in linguistics, and the consequences can still be felt today. The linguistic wars were full of third option rhetoric, including insults at conferences, and it's one of the reasons that we have vociferous anti-Chomskyans to this day. Similarly, Pullum is a master of option 3, but it hasn't really helped his cause as it often led to him overstating his point for the sake of rhetoric, thus opening himself up for easy counter-attacks. "Formal Linguistics Meets the Boojum" is a paradigm case of that: highly entertaining, deliberately hyperbolic, but thus also easily ignored as a jester's rant despite the very real problem it pointed out.

    2. Maybe I misread the post then, but when Norbert describes the Dunning-Kruger effect by writing:

      "In other words, holding a silly uninformed position is not something that will become evident to the holder no matter how clearly this is pointed out."

      This seems to me to be about intelligence, or maybe willful ignorance, but not expertise. If people pushing anti-generative-grammar arguments merely lacked the appropriate expertise, clear (polite, persistent) explanation should inform and mitigate the silliness.

    3. @Noah, Thomas

      Though the position I've been pushing may be unstable, I believe that we should try to thread a certain needle: go after bad ideas vigorously, but leave all personalities out. Moreover, we should refrain from going after people not because it does or doesn't work (I really don't know how effective this is) but because it is a disgusting thing to do. The problem, of course, is that people do not like having the ideas they hold dear called 'dumb.' But there is a big difference between attacking and ridiculing an idea and belittling the holder of an idea. So, though the DKE is about people, my recommendation relates to ideas. Moreover, I don't see going after an idea as aiming to persuade the holders of the ideas that they are wrong, so much as making it clear to onlookers that these bad ideas are really beyond the pale of rational inquiry.

      I should add that I do not think that ridicule is always appropriate. It isn't. I like the movement theory of control but I don't think that other theories are dumb. I like the idea that labels are important in the course of the derivation and not just for interface purposes, but I don't think that the idea that they are mere interface requirements is dumb. Ditto for many many other things. However, there are some ideas, and I named a few in earlier comments, that deserve no respect. Noah and Thomas note the downside of this strategy: it can quickly descend into personal invective and get too widely applied. However, what is less generally noticed, and that I am trying to emphasize, is that the converse is also true. There is a real downside to polite dialogue in that it valorizes BOTH sides of the debate, and IMO it is often the case that there are NOT two sides to some debates. My suggestion for preventing the dark sides of the policy from emerging is to try as best as one can to separate the ideas from the people.

      There is a second downside IMO. During the "linguistic wars" (which, btw, I lived through, and contrary to the imagery it was not really that martial and nobody got badly hurt) ideas were clarified. Nowadays, politeness has made it harder to get clear about what's important. The go-along-to-get-along mentality has baleful effects, especially when ideas are at stake.

      So, harsh talk? Sure, but not about people. Why? Because it's a very not nice thing to do (as your mother would tell you). What we need to practice is vigorous depersonalized debate. And we really need to discredit some of the dumb ideas out there. They really are influential, and part of this is because linguists don't make a scene whenever they get trotted out. And because we don't do this, we lend these dumb ideas credence. Shame on us.

    4. @Norbert: I think the key point is this one:
      And we really need to discredit some of the dumb ideas out there. They really are influential, and part of this is because linguists don't make a scene whenever they get trotted out.
      I certainly agree that linguists are too passive when it comes to starting or engaging in discussions, both within and across disciplinary boundaries. One piece of evidence is that the professional readership of this blog vastly exceeds the number of posters. Another is that you don't see many co-authored papers in linguistics (experimental work being the obvious exception), let alone co-authored papers where not all authors are linguists. Interdisciplinary workshops are also rare. It seems to me that there are strong isolationist tendencies, and these are part of the reason why we are doing a horrible job at engaging people outside the close-knit community of generative grammar. That is certainly something that needs fixing.

      But I think once this kind of shared community is in place, option 3 is no longer necessary. If you are already working with, say, biologists, you can politely point out crucial misconceptions to them and they'll believe you. It's in the best interest of your joint work, after all. And these collaborators, in turn, are considered more credible critics by biologists --- "outsiders" are by default suspicious, but if somebody from your own community is sternly pointing out the problems and showing how a linguistically informed approach can provide more insightful analyses and lead to more interesting questions, that's when you start to listen. At the end of the day, people want to do interesting work, and nobody will complain if you actively help them with that.

      tl;dr: nobody likes it when you pee on their parade, but they might be interested to hear how you can help make their parade even more awesome

    5. Adding to the above, I don't believe that there is a strong historical precedent for option 3 yielding good results, but there are plenty of counterexamples (though admittedly I have neither the authority of a historian of science nor the personal experience of somebody who lived through these events).

      In linguistics, we have the tragic case of the Peters & Ritchie (PR) result that every recursively enumerable string language is generated by some transformational grammar. PR took great care to explain the implications of their theorem, in particular what it does not imply for transformational grammar and how the formalism can be weakened to get around the massive overgeneration. But it was quickly abused as ammunition by opponents of transformational grammar, who went with a very strong option 3 approach. The replies to this attack were in turn so unwilling to yield even an inch that they declared the whole mathematical approach of PR pointless.

      In the end, an incredibly insightful result that already put the fundamentals of Minimalist grammars in place 25 years before their inception was relegated to the trash heap of linguistic history --- slowly rotting away and fading out of the community's memory --- because of its abuse by two warring factions.

      If discussion back then had been more level-headed and less infused with rhetoric, the bystanders might have focused less on verbal prowess and more on what PR had really shown.

    6. What we need to practice is vigorous depersonalized debate. And we really need to discredit some of the dumb ideas out there.

      I'm with you here. As I mentioned above, the invocation of the Dunning-Kruger Effect seemed aimed at people, not ideas, but I'm happy to be wrong about that.

      I think Thomas' example of the Peters & Ritchie result is a nice illustration of my worry about Option #3 asserting a false sense of certainty within a field. I don't know the details of that work or the response to it, but I'm also with Thomas in favoring level-headed debate rather than loud snark.

      I should also point out, though, that part of what I enjoy about this blog is the loud snark about ideas Norbert doesn't like. So, while I favor Option #2, I can appreciate at least the entertainment value of Option #3.

    7. I'm with Thomas on this. I agree with Norbert that we do need to engage, we do need to speak up forcefully for ideas that we think are important. But theatrics have real - but limited - value. Although criticizing ideas vs. people is an important distinction to draw, humans are not good at making that distinction, whether on the giving or receiving end. Linguistics has a diplomatic track record that is not enviable. When people think that linguists are smart, reasonable people who appreciate the problems that others are trying to solve, and when they recognize that linguists can articulately demonstrate that they know a lot of relevant stuff, then they're more likely to assume that our conclusions are not mere religious fanaticism.

    8. I await the success of my reasonable colleagues. I have no doubt that their reasoned, patient responses will one day bear fruit. Btw, I have a bridge for sale if you or Thomas are interested.

  4. Cynthia Allen told me a folk-version of the DKE many decades ago: "It's no fun playing Cowboys and Indians with people who are too dumb to know when they're dead."

  5. In addition to Thomas Graf's and Colin Phillips' misgivings about 'warring factions' and 'theatrics', I found Ewan's fortune cookie principle useful (from the thread on Open-mindedness last year):

    When you say "I want polarization" it seems you are mixing up the concomitant confidence with bravado. Obviously it would be crazy to think that linguists have a special monopoly on bravado, but digging in won't make it any better. The Fortune Cookie Principle (so called because I found it once in a fortune cookie) -

    "Strong and bitter words indicate a weak cause"

    Replies will remain in the realm of pointless ideological polarization and out of the realm of useful controversy as long as anyone detects that your cause is weak - because then they need not reply with anything better.


    The point is not even that your cause is necessarily weak, just that 'strong and bitter words' invite the implicature that it is. While some readers here are self-professed fans reading FoL to get a chuckle out of the snarkiness, I'm willing to bet there is a larger contingent of colleagues who are taken aback by the rhetorical style and vote with their feet. You will probably point out that you don't write for those people and that they don't need to read it, which is all true. Yet with the roster of distinguished contributors listed prominently on the right, the implicatures of the fortune cookie principle may sometimes spill over beyond these pages. I wonder how you think about that.

    Replies
    1. @Mark (in two parts)

      So, not only do I discredit my positions with snark but I embarrass my colleagues and friends. Wow. Thx Mom.

      Let me take your point more seriously for a moment. I suspect that you and Colin and Thomas and Ewan do not agree with me that there is some real junk out there. Or if you do agree, you do not think that calling junk "junk" is, even if accurate, effective. All it does is split the community and indicate a weak hand. As you might imagine, I disagree. The community and everyone else ought to split from junk. Pretending that it is not junk hurts the field both internally and externally. And the only way to indicate that something is junk is by calling it "JUNK." Anything short of this will legitimize it and allow it to flourish. And there is no nice way to call something junk.

      So how much junk is there? Well, enough. The Piraha stuff that has gotten so much outside publicity qualifies. Evans' recent book qualifies. The Pines et al. stuff that I went after a while ago makes the list. This stuff has no redeeming intellectual value and debating its merits is both a waste of time and self-defeating. Again, not unlike arguing "nicely" with climate science deniers, flat earthers, pyramid power junkies. Arguing "nicely" means that the other side has a point to make. It might be wrong, but it is a point worth making. Only drawing a line around it and calling it out is, IMO, honest. And I should tell you again that calling something junk cannot be done "nicely."

      Now, when is junk naming apposite? When something is junk, of course. But who gets to decide? Well, sadly, we all do. The Piraha stuff is influential junk and so needs calling out. It is based on a bad pun, and this has been made clear an endless number of times, and yet it persists. Why? Well, I sincerely doubt it's because of the quality of the argument. Has the snark helped? Actually, I think it has. And the more the field treats this as junk, the harder it will be for the outside world to treat it as serious, which, to repeat, it is not.

      Is this effective? Again, I believe it can be. Here are some great not "nice" papers: Chomsky's review of Skinner (recall, it claimed that Behaviorism was either clearly false or vacuous), Gould and Lewontin on vulgar Darwinism, Fodor and Pylyshyn on connectionism, Lewontin on EVO cognition. These were very not nice papers, and they changed the debate by focusing the differences and taking an emperor-has-no-clothes position. Similarly Gould, Block and others on the IQ debates, and Krugman and co now on vulgar Macro. Has this vigorous attack on these bad ideas expunged them? Of course not. Who thought that it would? But it has required response in many cases and it did make clear how weak the consensus views were. IMO, a nice polite solicitous discussion would have been worthless.

    2. @Mark part 2

      Calling junk "junk" carries a cost to the caller. I agree with that. But not calling it "junk" has a cost to the field. Being "nice" makes it hard for outsiders (and junk does sell as we've seen in linguistics) to see the fault lines. It even makes it hard for many insiders to see them. Junk degrades and like bad money it pushes out the good.

      Look, I understand that there are political reasons to play nicely with others, even to keep quiet about garbage views. But please don't pretend that this doesn't have a cost. Don't think that soft words persistently applied work wonders. They don't. They often just serve to conceal junk behind a patina of respectability.

      So are vigorous words useful? I think so, but I may be wrong. Is playing nice useful? Maybe, but it has a downside. Must judgment be applied in deciding to go harsh? Yes. Must judgment be applied in deciding to be "nice"? Sure. It's complicated, isn't it?

      Last point: who do I write for, and am I alienating readers? Maybe. I hope not, but it is possible. Does this worry me? Honestly, not really. I will keep the comments sections open for the words of the reasonable. I even invite the reasonable to write reasonable posts making reasonable points in a reasonable way so as not to offend. As for those poor hostages on the masthead, they can separate themselves from FoL with the push of a button and there is nothing that I can do about it.

    3. @Norbert: I feel like you're still thinking of this as a matter of principle --- the only way to deal with junk is to publicly denounce it as such --- when for me the question is really which method is most effective. And that is a question we cannot answer in a vacuum; we need to look at previous cases and be careful to take the specifics of linguistics into account. One thing I find interesting about your examples is that they are either very small, confined to a single field, or so big that they involve even the educated public and thus enter the realm of politics. In politics, being loud and snarky is probably the only way to get heard in the first place, and the whole process is less about truth than pushing the Overton window, rendering certain positions acceptable or unacceptable. So for evolution vs. creationism, mocking rants may indeed be the best strategy.

      Linguistic debates hardly ever enter that realm; the Piraha discussion might come closest, but this is actually a case where the media probably would've been much less interested if there wasn't the seductive narrative of the underdog debunking the elitist east coast intellectual that is constantly shitting on God's country in his anarchist pamphlets. And instead of long debates about whether Piraha has recursion or not, including heated rhetoric that feeds into the narrative of the underdog, there should have been a one-paragraph remark by a group of linguists who are not strongly aligned with Chomsky but still well-respected --- e.g. Keenan, Joshi, and so on --- about why this is completely irrelevant for UG. That would have ended the issue quickly: "look, even the non-Chomskyans say this is irrelevant"

      On the other end, I don't think linguistics has a lot of heated debates at this point that stay within the field (in contrast to the linguistics wars 40 years ago). Nowadays the fault lines are along disciplinary boundaries, with linguists largely at odds with psychologists, neuroscientists, biologists, and computer scientists. That's just an intrinsic property of language: it is a highly interdisciplinary subject, and different fields have different traditions, goals, and priorities.

      Crucially, though, these are all fields that have 1) more practitioners, 2) more funding, 3) more clout with the public, 4) stronger administrative support, 5) a better publishing infrastructure, 6) a greater pressure to produce papers and bring in grant money, and 7) consequently very little to gain from listening to linguists, at least on a practical level. But they all still have the curiosity of a scientist, so by default we should assume that they are open to new ideas.

      But coming in guns blazing has the real risk of killing that curiosity and turning unawareness or polite disinterest into open antagonism. Linguistics as a field is not well equipped on an institutional level to fight this out. Chomsky had the luxury in the 50s of having nothing to lose except his own career, but if linguists become known as a cranky bunch of zealots (even if that is just a misrepresentation by those whose ideas we would be criticizing), that hurts the entire field. It will make it much harder to find collaborators, much harder for psycholinguists to be hired in psychology departments (which is already very unlikely, unfortunately), much harder for linguistic considerations to be taken seriously by the computational folk, and so on.

      At the end of the day, it comes down to this: would a linguistics department hire, say, a computational person if it appeared that he/she/xe did nothing but point out the problems with linguistic theories and never produce a positive result? Considering your own attitude towards Alex C's work, the answer is arguably no. So why should non-linguists care about linguists if they're just kill-joys complaining from the peanut gallery?

    4. Upon rereading my own post (oh the vanity!), I feel one clarification is needed: I didn't mean to imply that Alex C has no positive results and that his work is irrelevant (quite the opposite). But you have been very adamant that as long as he does not produce any results that fit your criteria of what constitutes an interesting linguistic problem, you don't really care about his qualms with the standard UG story. Just like a psychologist won't care about your problems with connectionism, Bayesian methods, empiricism, etc. if you have nothing productive to offer that they might be interested in.

    5. Truly amusing to read. Just a comment from the cheap seats [concerning Norbert's post a few up]:

      Norbert writes: "Here are some great not "nice" papers: Chomsky's review of Skinner (recall, it claimed that Behaviorism was either clearly false or vacuous),"

      In my entirely uneducated opinion Chomsky's review of Skinner was not so effective because it was nasty but because Chomsky had something better than Skinner to offer. So the best way of dealing with views that Norbert considers junk would be offering views that are clearly better. But that is the problem with Norbert's never-ending outpourings of nastiness - they can't hide the fact that the Chomskyan 'program' ran out of steam decades ago. Anyone who was in doubt about that will by now wonder why someone who allegedly has a solid position himself would bring out the big guns [form a 'panel of experts'!] to attack a book like "The Language Myth"...

    6. @Thomas

      This will be my last word on this. I really don't see what you are driving at. There are many results. So the fecundity of the field is not at issue. We even have results in related areas (processing, acquisition, and even some neuro). This stuff is politely ignored, by and large. What is not ignored is work that is truly bad. I have cited some instances thereof. And the work that gets coverage all starts with the claim that GG is dead or that Chomsky's program has proven to be wrong or… Nor is this just a matter of some giant-killing of interest, as you suggest. It comes regularly, and Everett is just the last in a long line of such articles. Nor does "refuting" this stuff calmly really help all that much. I've tried this persistently, as have others, and you are wrong to think that this works. Of course, I won't stand in your very reasonable way should you wish to make nice here, but I don't intend to hold my breath.

      As for being ignored, this I think is incorrect. Surprisingly, Everett, Evans, connectionists, and Bayesians DO feel like they need to respond. And they do. Of course, they also ignore you if they can, so the point should be to make this hard to ignore. It takes time, but it does work. I did some of this when AI was sweeping the world in a long article in Cognition with Elan Dresher. It had a small effect. Let's just say that my experience does not jibe with yours wrt the effectiveness of vigorous attack. This is not to suggest that you are not more experienced in these matters, and so maybe you are right. But it does not fit well with mine.

      So, let me end with one repeated point: this is not a call to stop doing positive work. We are doing this in spades and should continue. What it is a call for is to call junk "junk." There is a gentility in current academic debate that lets bad work flourish. It's considered rude to say that there is no there there. There is a presupposition that being confrontational must backfire. Well, given the relatively crappy situation we find ourselves in now (not content-wise, politically), I don't see that keeping a low profile has been a huge hit. So let's say we disagree.

      The good news is that people like me will give people like you a present. You do the reasonable criticism (as well as the positive work, but remember to do some of the criticism too) and you can position yourself as the reasonable critic while I hold your far left flank. This should give you more room to move. So good luck.

      Delete
    7. @Thomas: I think your diagnosis is right: the media loves the Everett Piraha case because it's a classic underdog gone rogue narrative, and "strong and bitter words" only add to the effect. This is a case where we also have other forms of dissent though. Besides the article in Language by Nevins et al., there are several examples of well-respected linguists not aligned with Chomsky weighing in against Everett's case — check out Levinson's reply in Current Anthropology (pdf) or Enfield's highly critical review of Everett's book in JRAI (pdf). E.g. Enfield writes: "It is not original enough to be a stimulant for research. It is not investigative enough or fair enough to the literature to be a news report for the thinking public. It is not hard-hitting or specific enough to require or even allow that opponents seriously engage with the arguments."

      @Norbert, nobody here is saying (as far as I can see) that harsh criticism is impossible or that we should be nice about obvious garbage. There is no unnecessary gentility in the Nevins et al. piece in Language, nor in Enfield's words above. As I said on the earlier thread, scientists should have bullshit detectors aimed not at the level of schools or scholars, but at the level of ideas.

      You say that "having the left flank" of the reasonable critic is an important position to fill. This gets us back to the divisive rhetoric of "sides" and in fact invites me to picture a spectrum with NH and DE (Everett) at the far extremes, flanking constructive critics like Adger, Yang, and Newmeyer on one side and Haspelmath, Enfield, or Kirby on the other. Where can we make real progress? Surely in the middle as long as the extremes are too far away (and too busy being snarky) to talk to each other.

      I will admit that my view of scientific discourse is much rosier and more optimistic than yours. I believe in overcoming evil with good. I'm with Thomas Graf, Ewan, and others in doubting the productivity of "vigorous attack" and seeing room for a constructive and critical stance. This probably means I'm not your intended audience. Still, I'll check in every once in a while to keep a finger on the pulse of east coast generative linguistics (I'll resist a pun about blood pressure here).

      Delete
    8. "Where can we make real progress? Surely in the middle as long as the extremes are too far away [...] to talk to each other."

      I want to point out that this assumes that someone like, e.g., Adger can further our understanding of the human capacity for language by talking to someone like, e.g., Haspelmath and, more to the point, that they cannot do so ("make real progress") without such talking. I should say, I'm fairly new to the field, but both of those assertions (can make real progress by talking, cannot make real progress without talking) strike me as far from obvious.

      And, whether these assertions are true or not, the automatic assumption that they are (i.e., that progress/truth awaits "in the middle") is what the original post was about, if I understood correctly.

      (While I am the one who forwarded Norbert this link, I have no privileged insight into what he intended, necessarily.)

      Delete
    9. @Norbert: We obviously won't reach a consensus, but I think it is particularly unfortunate that you didn't address my remark about Alex C's work and its reception among linguists. Because my main point isn't that criticism should be polite; it's that criticism, irrespective of its presentation, is not as effective as collaboration. And the interaction of mainstream linguistics and computational linguistics clearly shows this.

      Computational linguists have plenty of gripes with linguistics --- from concerns of implementation and the unfounded claims about computational efficiency to the terminologically confused recursion debate as well as, obviously, the PoS --- and they have been very vocal about them at various points, and it has changed diddly-squat about how linguists think about these issues. There have also been very aggressive attacks on Minimalism, e.g. by Lappin, Levine and Johnson, and constant complaints about the lack of rigor --- both were brushed aside, and I'd be surprised if the majority of the linguistic community remembers anything about these cases except that "those outsiders just don't get linguistics".

      Others like the Categorial Grammar community have chosen the path of detached co-existence. They don't criticize and they don't get criticized, and there is zero exchange of information (modulo Steedman's work, which incorporates insights from all kinds of frameworks and is also read by tons of people). That's not due to a shortage of great ideas --- CG's flexible constituency, for instance, has a lot going for it and points to a very deep problem with the standard global interpretation of constituency tests as probing the one and only structure. But no communication means no exchange of ideas and hence no common ground.

      Now the important thing is that you won't find any of these people in a mainstream linguistics department in the US. The aggressive critics are in "non-Chomskyan departments", in computer science, or in specialized research centers, while CG is restricted to a few departments in Europe. They have little chance to be hired by a mainstream department because the work they do is considered uninteresting at best, deeply misguided at worst. But of course tons of linguistics departments have hired computational linguists. What's so special about them?

      Delete
    10. Well, they take linguistic ideas seriously, actively incorporate them into their work, and can articulate the implications of their findings for linguistic research. The work ranges from computational phonology in the style of Tesar, Heinz, or Hayes & Wilson to TAG and Minimalist grammars. If Smolensky, Joshi or Stabler had gone full frontal assault on linguists, the world would look a lot different nowadays (the very small world of linguistics, that is). But they didn't; they took the parts of linguistics that they found most insightful, merged them with a computational perspective, and created something new and interesting that linguists can appreciate.

      So now we live in a world where certain types of computational work can land you a job in a linguistics department, and that means that students are exposed to completely new ways of thinking about language, and sooner or later they will, all by themselves, notice the problems that the vociferous critics pointed out unsuccessfully many years ago. And with time, this will become the new mainstream view.

      These lessons --- which my younger, much more arrogant self had a hard time learning --- should also be taken to heart by linguists when they talk to other scientists. If you want generative grammar to play a prominent role outside linguistics, explaining to people that their assumptions are flawed and all of this has been proven wrong in paper XYZ is not going to work. Many won't read the paper, those who do will struggle with the technical machinery, and the select few who can handle all of that might still not see the relevance of the work to their own research. Results don't matter if the audience can't appreciate them.

      It is much more fruitful --- and fun, imho --- to take from their work what can be salvaged and turn it into something new and cool that everybody finds intriguing, something that couldn't have been done without the generative input. That gets you street-cred, which leads to jobs, which leads to a new generation of scientists that are more GG-savvy. It's a lot more time-consuming than criticism --- and I'm speaking from experience here --- but in the long run it gets you much better results.

      Some of this is already happening, in particular in psycholinguistics and neurolinguistics, and I am not a good judge of how much crosstalk there is between fields in these areas. If there's none, then we should try to find out why this is the case and how it can be fixed. Because, once more, I'm not dogmatic about this, I'll take whatever approach works best, but my personal experience is that even a small degree of collaboration beats massive opposition.

      Delete
    11. @Thomas

      I did not address your Alex C remark. But I deny that this work is ignored. It is read, and people conclude that it does not bear on what they are interested in. This work is NOT junk; it is simply directed at questions at right angles to those that concern many linguists. Sometimes the work resonates and people (e.g. Hunter, Dyer) use the methods to advance questions of more direct interest to linguists. Heinz has a nice review of how these methods bear on questions of interest to me, for example, and I read his stuff with interest and am always informed. Ditto the critique of early minimalism by Lappin et al. Everyone read it, and it actually had an impact, especially as concerned the complexity of early minimalist methods. So, I guess I deny the premise of your point.

      As for working together and the other apple-pie issues that seem so near and dear to your heart: I have nothing against collaboration. Indeed, I would go so far as to say that here at UMD we practice what you preach, and I have been on board with this from day one (indeed, before day one). However, despite lots of heroic efforts to interest outsiders in this way of doing things, the returns on effort have been, IMO, slim. Rather than convert the heathen, it has proven better and more productive to co-opt the relevant tools and use them in house. Part of this, however, has been constant criticism of these tools as understood by those we stole them from (psycho types). The same is true in the computational domain, if Berwick's efforts are any indication. It is hard to get people to use the computational tools to address issues people like me care about. When, however, this is done (e.g. Berwick, Yang, Hunter, Frank), we all sit up and take notice. What we don't really care for is being told that we have found nothing of interest and that results we take as well founded are too imprecise to mean anything. In short, we reject the criticism, and as you may have noticed on FoL, we are more than happy to debate the point.

      So please be eclectic. I wouldn't stop you if I could. But my experience has been far different from yours, and drawing sharp lines that clearly distinguish intellectual positions has proven, IMO, to be very useful. And, of course (this is the main point), drawing these lines means making judgments, some of which are not nice.

      Delete
  6. @omer So I'm not sure there's likely to be an Adger and Haspelmath coming any time soon, but there have been Adger and Smith (sociolinguist), Cheshire (sociolinguist) and Adger, Adger and Trousdale (construction grammarian!), Culbertson (psycholinguist/learning-bias person) and Adger, and a bunch more. None of these people are, qua representatives of their disciplines, interested in minimalist syntax, and they vary dramatically in how interested in/signed up to the notion of UG they are. But my feeling is that I learned a huge amount from working with them, and, slightly contra Norbert, at least the sociolinguistics/syntax work has actually had some positive impact on sociolinguistics in general, to the extent that, somewhat weirdly, I'm an invited speaker at the next NWAVE, and I think it's true to say that there's been an upsurge in interest among (some) sociolinguists in generative grammar (not, by any means, just my doing! - lots of other people involved, of course). There is still a rearguard who think that there's something heretical about combining (as I do in some of that work) feature-checking dependencies and frequency data - or lexical item specifications and networks of friendship groups by ethnicity (indeed, at an early presentation of this, someone whispered to a colleague of mine that my talk with Jen Smith was "dangerous"), but I think that view is becoming a bit old-fashioned now.

    Is this about progress being `in the middle' as Omer said? No, I don't think so. There was a potentially interesting way of seeing connections between the systems of grammar and the systems of use. But where this really connects to the issue under discussion is that I had to use oodles of charm to get to the point of sociolinguists trusting that I really was interested and that I didn't think what I did was intrinsically better and that what they did was `junk'. There was definitely a sociological barrier that took work to climb over, and, I think, impeded progress. And some of that is definitely a result of snippy discussion and bad blood over the years.

    That's not to say that I don't agree with Norbert about drawing sharp intellectual lines, clearly stating negative judgments, etc., especially when people misrepresent work or ideas - which is pretty common. [That Enfield article Mark linked to states that generative linguistics has 'yielded just one finding' (recursion), which immediately set my blood boiling because it so misrepresents even what it quotes from, and we should challenge that kind of thing when it appears]. But I disagree with Norbert about cordiality, which I try to maintain (not always successfully), precisely because I don't want to create new sociological barriers down the line and because I do think that engagement, while it's a lot of work, can be successful with (at least some) people and can be intellectually rewarding.

    ReplyDelete
    Replies
    1. Thanks David. These are exactly the kind of things that make me feel my optimism is not without grounds.

      @Omer, I did not say, of course, that truth was to be found in the middle — my position is not the rosy relativist straw man you make it out to be. The rather simple fact is that truth will not miraculously emerge from snarkiness and divisive rhetoric; it will only emerge from scientists listening to each other and collaborating to mutual benefit. (Truth is too lofty anyway; I'm happy with good research questions and useful, cumulative answers.)

      Delete
    2. @David

      Norbert said little about cordiality. What he said was that junk needs to be called "junk." What Norbert is somewhat skeptical about is that this can be done "nicely." The nod towards civility was to suggest that ideas, not people, be called out. So when someone says, for example, that "generative linguistics has 'yielded just one finding' (recursion)," this should be deemed a badly misinformed, deeply ignorant piece of junk. It's not enough to say "well, that's your opinion and I have mine," nor "what about…" It needs a firmer response, for anything short of this will be taken as indicating just another in-house disagreement in the field. In fact, thx for making this point, for it goes some way in showing that Mark is not quite right. The problem is not merely to give the Piraha stuff bad reviews but to give it the right bad reviews. The point is not that Everett's stuff is wrong, but that EVEN WERE IT RIGHT IT WOULD BE BESIDE THE POINT (it also seems to be wrong, but IMO this is a secondary fact).

      Now here's a homework problem: take something that you think logically misses the boat and express this fact in a way that will not be taken as very harsh criticism, or indeed as snarky and nasty. But what if this is in fact the problem? Then not making this point, and making another, "nicer" one instead, will not only mislead but will fail to engage the issues.

      For some reason, the discussion turned into a debate about collaboration with others and civility. If only this were the problem. It isn't. There are people out there who think that GG has shown nothing in 60 years and that its methods are irredeemably hopeless and wrongheaded. The arguments defending these views are very very bad. I know because I've reviewed a bunch of them. Showing that they are very very bad will not be taken well; believe me, again, I know this. Maybe David A would like to recount his interactions with Evans, or David P with Everett. There are very few ways of saying that someone's argument has missed the boat entirely without being taken to be rude and nasty. And from what I can see, the desire NOT to be so retards saying what is the case: that some of the most popular ideas out there, the ones that garner lots of attention, are really bad. So you choose: shut up and get along, or be ready to be called snarky. I prefer door #2.

      Delete
  7. briefly cos on phone: completely agree we need to combat misunderstandings and misrepresentation. But there's a difference between how what you write is taken (which we can't control, really) and how it's intended. Thinking of the Evans incident, I think our joint piece on this and my solo one were intended as correctives of mistakes in Evans' work, really not aimed at Evans as a reader but at others (so the homework is already done). I think they were taken differently by different people. What I learned from that situation was that I'm never going to convince Everett, Behme or Evans that generative grammar is a reasonable and very interesting research programme which has had sufficient success to make it worthwhile pursuing further (a sad realisation, as I usually think reasoned argument will win the day), but that's fine. I can convince others who are more interested in fact than rhetoric that what Everett, Behme and Evans write about generative grammar is erroneous and can't be taken seriously. And convincing others is best done dispassionately, I think.

    It actually feels to me that a lot of this `generative grammar is dead' stuff is kind of a mantra, expressing a hope and indicating a tribal association. It's certainly false. GG is thriving intellectually.

    But on top of that, I do think the sociological-barrier issue I raised earlier is important. We generativists are sometimes perceived as arrogant and dismissive, but there's nothing inherent in the intellectual position that should lead to that, and I think we have a huge amount to offer other fields. But they have to trust that we are serious in engaging with their interests.

    ok, longer than I planned. Hope it posts from my iPhone!

    ReplyDelete
    Replies
    1. David, since your effort is all about making your position understood to the other tribe, maybe you can help me out with something that baffles me. In your solo paper on Vyv's work you write:

      "Evans’ Aeon article is titled `Why language is not an instinct'. One might be tempted to conclude from this that generative linguistics has proposed that language is an instinct. But that would be wrong: linguists don't use the word 'instinct' as a scientific term. The phrase is a metaphor that Steven Pinker came up with for his popular science book `The Language Instinct' (Pinker 1994). Linguists talk rather of an innate capacity triggered by, and partly shaped by, experience." [p.1]

      I would have concluded that we all ought to avoid the misleading term 'language instinct'. If that is the case, can you please explain why you did not object to the journalist in "New Scientist" using that very term, but instead proudly tweeted on 31/03/14:

      "My PNAS paper with Jenny Culbertson is out! Born to chat: Humans may have innate language instinct - New Scientist: newscientist.com/article/dn2533..."
      https://twitter.com/davidadger/status/450869051670011905

      Delete
    2. They came up with that byline without asking Jenny or myself, and I did object, to no avail (or even response!). Very annoying. I don't think that was at all a good title, as the focus of our paper was on how grammatical knowledge is stored once acquired (though we did point out that adopting a universal scopal order made the more general typological pattern more understandable); really it was about the idea that the way we store grammatical information needs to involve structure in addition to, or instead of, transitional probabilities. The tweet just created the title out of the link directly, I think - but it couldn't be got rid of in the title of the New Scientist piece anyway. It was interesting interacting with the media in that way, as what you actually want to say gets bulldozed over in the desire to have a saleable headline at a timescale that suits the media outlet. I did an interview with Australian radio about the piece, and that was much better, as there was the time, just about, to nuance the findings.

      Delete
    3. Thanks for this. I understand that one has little control over what journalists say. But if I were as concerned as you claim to be about the misleading potential of the term 'language instinct', I probably would have tweeted using the title of the PNAS paper rather than linking to the New Scientist article.

      And then there remains the non-trivial issue that many of your colleagues actually use the term 'language instinct' in their scholarly work. I learned some even claim there is a 'second language instinct'. Surely you cannot be unaware of this. So I am wondering if David P. had your solo piece in mind when he wrote: "We are talking about a plague of work that also violates the most minimal standards of factual accuracy"?

      Delete
    4. I look forward to reading your next tweet about your next PNAS paper covered in the New Scientist.

      Just Google-Scholared "language instinct" and there's very little use of this term - I found 2 in the first 10 pages (Musso et al.'s paper in Nature and Bonnie Schwartz's paper on second language acquisition). Nothing in any of the major linguistics journals (LI, NLLT, Syntax, Lingua, Journal of Linguistics, Language, Phonology, etc). I think my use of the generic bare plural 'linguists' is consistent with this. As you'll know from the basic semantics of generics, they admit of exceptions to general rules rather than being unrestricted universal quantifiers. Birds fly, but Tweety, the famous penguin, doesn't. Your own phrase `many of your colleagues' does seem to lead to a falsehood, though, as it's not true that many of my colleagues use this term in their scholarly work. Indeed, few of them do. So I'm wondering if perhaps David had some of your blog posts in mind when he wrote that phrase?

      Delete
    5. Terrific, so your statement was a Chomskyan Universal [as opposed to a Greenberg Universal]? When Galilean scientist David A. utters sentence [1]

      [1] linguists don't use the word 'instinct' as a scientific term.

      He does not mean to suggest
      [2] no linguist uses the word 'instinct' as a scientific term.
      but rather
      [3] linguists don't use the word 'instinct' as a scientific term, except for those who do.

      And since [3] cannot be falsified by empirical evidence no matter what linguists say, the Adgerian Postulate is a genuine Chomskyan Universal.

      Just one more thing: Given that you subscribe to [3], exactly why was it wrong of Vyv to use the word 'instinct'?

      Delete
    6. You are joking, right? In case not, I wasn't making a scientific pronouncement, I was making a generic statement about how people in a certain class (linguists) use a particular term. If I say `actors don't use Macbeth to refer to the Scottish play' that's a general claim about the class of actors, not made false by the occasional actor using `Macbeth' to refer to the Scottish play. But you know all this from basic philosophy of language courses.

      Delete
    7. Dear Christina,

      I wonder if I could ask you a rather personal question. What do you take out of all this? I mean, you obviously came to the conclusion that syntax work in the minimalist tradition is a heap of meaningless gibberish and has been so for decades. But then why devote so much of your energy, not only here but also in your publications, to criticizing Chomsky's vague pronouncements in informal interviews (rather than his detailed hypotheses and conclusions in his scientific publications), or what David Adger writes on his Twitter feed, instead of producing original work in the intellectual tradition of your choice?

      What is your explanation for the fact that third person pronouns cannot be bound in a simple transitive sentence in English but can be in Frisian? Or of omnivorous agreement in Georgian? Or of the distribution of linkers in Kinande? Or, if you don't care about these questions, of whatever else you are interested in? Aren't these questions about actual linguistic facts of actual languages more interesting than what David Adger tweets or what Chomsky thinks about nematodes?

      Delete
    8. @David: you are right, I was not entirely serious about the scientific claim. However, my question was serious: if you are fine with some linguists talking about a language instinct, then why is it such a big deal that one particular linguist (Vyv) used that term? If you meant your claim about what linguists do not say to be as non-committal as you now say, your 'moral outrage' about Vyv's use of the term seems pretty odd...

      @Oliver: thank you for your interest. I answered your question on this blog more than once - nothing has changed. Also, thank you for the suggestions, but I leave the kind of work you suggest to people who actually have training in linguistics. Now allow me to ask you a couple of questions as well. Just what makes you think that I "obviously came to the conclusion that syntax work in the minimalist tradition is a heap of meaningless gibberish and has been so for decades"? David A. was so kind as to explain that his talk about 'linguists' does not entail him talking about all of them. So why would you believe that my talking about specific work of specific linguists [without ever mentioning the phrase 'meaningless gibberish'] has the implication you draw?

      Delete
    9. Oh, my apologies, I have obviously got the wrong impression. So there are works in minimalist syntax that you find worthwhile, then?

      To answer your precise question: I believed you thought all work in minimalist syntax to be worthless because I thought you were convinced that Chomsky's contributions to linguistics post, say, 1980 were worthless, and I came to this conclusion because you quote, apparently approvingly, Postal's and Lappin et al.'s extremely severe critiques thereof. Now, almost by definition, the core hypotheses of a work in minimalist syntax will be some variation or extrapolation of Chomsky's (narrow syntax reduced to a bare minimum, binary branching structures, successive cyclic movement...).

      But it seems I was wrong, and again please accept my apologies. Out of curiosity, what are examples of good works in minimalist syntax, in your opinion?

      Delete
    10. Thank you for the elaboration, Oliver. I am not sure I fully understand what you mean by "works in minimalist syntax", considering that by 1980 we barely had GB, let alone minimalism, and that the two are so different that, for example, Peter Culicover thought the minimalist perspective "explicitly rules out precisely the major theoretical achievements of the past. All of them." (Culicover, 1999: 138). So based on what do you suggest what Chomsky was doing in 1980 qualifies as minimalist syntax?

      Also, you seem to think that because I criticize some work by Chomsky, I must reject ALL work that is done in a Chomskyan framework. This is not the case. Given that I have not read most of the work done by minimalist syntacticians, I am in no position to judge whether it is solid work or 'worthless'. Since David Adger used the term 'tribal association', maybe this is a feeling shared by generativists: if you're 'against one, you're against all'? It certainly is not my attitude. As a philosopher, I have no 'stake' in one theory being right and another wrong. My interest is in how people employ argumentative strategies [and Norbert certainly offers a wonderful opportunity to study that] and how standards of intellectual honesty are upheld [or violated] during argumentation. I am especially interested in instances of internal inconsistency and double standards.

      So here are a couple of recent observations: Norbert claims one should attack ideas that are junk but not the people proposing the ideas [a few comments up in this thread]. Yet he also claims that Dan Everett maintains his claims about recursion for personal gain [fame in the media, if I recall correctly]. That seems to be an attack on Everett's character, not his ideas. David A. criticized Evans for using the word 'instinct' because it is misleading and because linguists do not use this word. Yet when it is pointed out to him that other linguists do use the word, he does not admit he made a mistake but invokes semantic implication [which of course would apply to Evans as well]. I could go on, but you can read for yourself and, as long as you're not guided by tribal alliances, you'll find numerous similar examples of internal inconsistency and double standards on Norbert's blog alone...

      Delete
    11. "So based on what do you suggest what Chomsky was doing in 1980 qualifies as minimalist syntax?" Mostly on his own account of what he was trying (but at the time failing) to do (more on this below).

      "Also, you seem to think that because I criticize some work by Chomsky, I must reject ALL work that is done in a Chomskyan framework." Well, in your own How scientific is biolinguistic science you report, apparently approvingly, harsh criticism of the minimalist program (that it "fails to satisfy basic scientific criteria", that "[its] ontological foundation of [...] is incoherent"), and you write yourself that "convincing refutations" of these critics are "elusive." So I take it you find the epistemological stance of the MP at best unconvincing, most likely very fragile, and at worst incoherent (do correct me if I am wrong). Taking this into consideration, I concluded that you think that work done in the MP framework is unconvincing at best, incoherent at worst (I admit that this might be a professional deformation; in my line of trade we don't often expect good work to come out of weak or incoherent foundations). But your clarification is helpful: you do not, in fact, reject all work done in a minimalist framework, at least as a matter of principle, and your interest is in argumentative strategies.

      With that in mind, may I suggest you (re)read E. Reuland (NLLT 18)? I would believe you already have, but it is missing from your bibliography of answers to criticisms of the MP in How scientific..., though oddly you cite (Reuland, NLLT 19), which was a reply to a reply to this text (the same is incidentally true of your quotation of (Roberts, NLLT 19), which should be (Roberts, NLLT 18)). There, among other things, you will see that at least some prominent figures in minimalist syntax (presumably most, in fact, and of course including Chomsky himself) do in fact perceive (rightly or wrongly) a strong intellectual continuity between GB and MP. In other words, they strongly disagree with Culicover's quotation, or with Lappin, Levine and Johnson's proposition that "The ease and speed with which GB theorists have discarded [their theoretical framework] and embraced the bizarrely vague and unmotivated assumptions of the MP thus suggest that in large sections of the field theoretical commitment has little to do with evidence or argument." See the aforementioned article of Reuland for detailed empirical arguments on the binding of reflexives in Icelandic, or the reply of Holmberg (same issue) for detailed explanations of how his study of mainland Scandinavian languages led to his choice of an MP framework.

      Now I remark that, as a philosopher studying argumentative strategies and "especially interested in instances of [...] double standards", you quote neutrally (or even, arguably, approvingly) Culicover and LLJ in your article, and certainly seemed to endorse Culicover's quotation in your argument above, but you quoted neither Reuland's nor Holmberg's nor Roberts's reply (all rich with empirical arguments), and of the replies you did quote, you write that they were "spirited" and unconvincing (how so?). Why is that?

      More seriously, how are you going to determine whether Culicover is right that the MP is an utter break from GB, or whether Reuland, Holmberg and Roberts are right that there is in fact a strong continuity, without looking past argumentative strategies at the actual scientific content of these theories? (I'll note in passing that Culicover offers virtually no argument in defense of his assertion, which is not surprising as it is from a review article, whereas, again, Reuland, Holmberg and Roberts offer detailed accounts of how and why they thought their previous GB work could profitably be recast in an MP framework and what new insights this yielded.)

      One last trivial point. Because I have noticed you cared deeply about accuracy, I'll note that my name is not Oliver.

    12. First, my apologies that auto-correct keeps changing your name. Second, thank you for pointing out the error in the Roberts quote, much appreciated. Third, I am a bit busy at the moment and will have to answer most of your questions later.

      For now just a comment regarding the (dis)continuity in Chomsky's pre- and post-minimalism work. I took Chomsky at his word when in the MP he wrote "The end result [of the Minimalist Program] is a substantially different conception of the mechanisms of language" (Chomsky, 1995, 219). I also looked at what those very close to Chomsky say about his work after he proposed minimalism. I trust that you agree that Neil Smith and Robert Fiengo are both competent and close enough in spirit to Chomsky that what they say should not be dismissed.

      So here we go:

      "[Chomsky] has overturned and replaced his own established systems with startling frequency” (Smith, 1999, 1)

      "Let us remember we are talking about someone who tries to reinvent the field every time he sits down to write.” (Fiengo, 2006, 471).

      Now, to me, using phrases like 'overturned and replaced' or 'reinvent the field' does not sound like celebrating continuity...

    13. "The end result [of the Minimalist Program] is a substantially different conception of the mechanisms of language”. Crucially, this does not imply that the assumed mechanisms are different, only that they are conceptualized in different ways. Just like one can study regular languages via finite-state automata, monadic second-order logic, projections of strictly 2-local string languages, uniformly right-branching CFGs, and so on.
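A minimal sketch of this "same object, different conceptualizations" point (my own illustration, not from the comment): one regular language, the strings over {a, b} with an even number of a's, picked out by a two-state DFA, by a regular expression, and by a direct "semantic" description.

```python
import itertools
import re

def dfa_accepts(s):
    """Two-state DFA: the state tracks the parity of a's seen so far."""
    state = 0
    for ch in s:
        if ch == "a":
            state = 1 - state
    return state == 0

# The same language as a regular expression: pairs of a's with b's in between.
PATTERN = re.compile(r"(b*ab*a)*b*")

def regex_accepts(s):
    return PATTERN.fullmatch(s) is not None

def semantic_accepts(s):
    """The 'conceptual' characterization: an even number of a's."""
    return s.count("a") % 2 == 0

# All three characterizations agree on every string up to length 6.
for n in range(7):
    for w in map("".join, itertools.product("ab", repeat=n)):
        assert dfa_accepts(w) == regex_accepts(w) == semantic_accepts(w)
print("all characterizations agree")
```

Three very different-looking descriptions, one mechanism; the point is that changing the conceptualization need not change the object described.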

      Chomsky does indeed change the specifics of his proposal with pretty much every paper --- I don't think he has ever written a series of papers that share exactly the same proposal and apply it to different phenomena. But the basic ideas are pretty much the same, in particular from a computational perspective in the sense of Marr, which abstracts away from algorithmic elements like features. Transformational Grammar and MGs are very similar, and they have more in common with each other than with, say, TAGs or GPSG.

      I'm not quite sure where I would put GB in this spectrum. Jim Rogers's formalization puts it closer to GPSG, but it also removes the whole derivational component represented by Move Alpha. That's a mathematically sound solution, but it obviously provides no insight into how GB's Move Alpha compares to the operations we see in ((R)E)ST and Minimalism.

      Be that as it may, a lot of the seemingly contradictory claims about continuity VS constant flux stem from the fact that the machinery has changed a lot on the algorithmic level, but much less so on a computational one.

    14. "what they say should not be dismissed." Sure, they should not be dismissed. But neither should Reuland, Holmberg, Roberts or our own current host, Norbert Hornstein, when they argue for continuity. So there are (at least) two camps: people arguing that the MP is a radical break, perhaps even a revolutionary one, and people arguing that there is an unbroken epistemological thread from Syntactic Structures to On Phases. Which camp is right is probably an ill-defined problem, seeing how subjective the central question is (though I certainly think that relevant evidence does exist and I, for one, have certainly reached a conclusion), but if what one cares about is argumentative strategies, then one should realize and recognize that there are actual, informed disagreements about this point.

      Delete
    15. Could you guys (Olivier and Thomas) maybe give an example of a computational property that has been preserved over time? I can see some similarities between any given pair in the alphabet soup of alternatives -- MGs, MCFGs, TAG, CCG, ACG, GPSG, HPSG, etc etc -- but nothing that particularly distinguishes the sequence ST -- REST -- GB -- MG. I kind of feel that MG has a stronger resemblance to the categorial tradition in some respects.


    16. I think an enlightening way of viewing the step GB -> MG is in terms of derivation trees/DAGs. What follows is of course not how things are usually presented, but I think it is an interesting rational reconstruction.
      In GB, all merge steps came before any move steps (from this perspective, the level DS was just the maximal derivation subtree not containing any move steps). Thus, move had two reentrant arcs, one to the source and one to the target position of movement.
      The step to minimalism consisted solely in observing that all the grammatically relevant information involving move was contained in the reentrant arcs; it didn't matter when movement took place, only whence and whither. (Late operations then are walking this idea back.) This meant that we could treat move as having just one reentrant arc, to the source of movement. This meant of course that merge steps and move steps were now interleaved, which made the level of DS disappear.

      In other words, I see what was preserved from GB -> MP as basic analyses (but now with different terminology), and rough grammatical architecture.
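Greg's reconstruction can be made concrete with a toy encoding (entirely my own invention, not his): a derivation is a nested tuple of "merge" and "move" steps over lexical-item strings, and the GB architecture is then the special case where no move step occurs below a merge step.

```python
def move_free(deriv):
    """True iff the (sub)derivation contains no move step at all."""
    if isinstance(deriv, str):          # a lexical item
        return True
    op, *kids = deriv
    return op != "move" and all(move_free(k) for k in kids)

def is_gb_style(deriv):
    """GB architecture: all move steps sit above all merge steps.
    The maximal move-free subtree underneath is what GB called DS."""
    if isinstance(deriv, str):
        return True
    op, *kids = deriv
    if op == "move":
        return all(is_gb_style(k) for k in kids)
    return all(move_free(k) for k in kids)

# GB-style: move applies only after the whole merge tree ("DS") is built.
gb = ("move", ("merge", "C", ("merge", "T", ("merge", "v", "VP"))))
# Minimalist-style: merge and move steps are interleaved, so no DS level.
mp = ("merge", "C", ("move", ("merge", "T", "vP")))

print(is_gb_style(gb), is_gb_style(mp))   # True False
```

On this view the "formal tweak" is simply dropping the requirement that `is_gb_style` holds of every derivation.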

    17. @Greg: That is an interesting perspective on GB, but there's also another axis, along which GB is to Minimalism like OT is to SPE: You start out with a completely unrestricted operation (Move Alpha or the Generator) and then narrow down its range via a cascade of constraints. It is the unrestrictedness of Move Alpha that makes me feel that GB isn't as close as, say, TAG to ST and Minimalism. Of course the constraints can be shifted directly into Move and then we are more or less in the same ballpark again.

    18. @Alex: How closely formalisms are related really depends on what properties one looks at, but here's my take. Thanks to Abstract Categorial Grammar and the 2-step approach we know that most linguistic formalisms can be understood as a tree language (D-structure, derivation trees) that is mapped to the set of output trees (S-structure).

      [
      Side remark: This holds even for unrestricted formalisms like HPSG, which apparently can be translated into TAGs the way it is used by most practitioners. But I don't know the HPSG literature well enough to say whether the translated fragments are indeed sufficient for linguists.
      ]

      For ST, EST, GB, Minimalism, GPSG, and TAG, we also know that the underlying tree language is regular, and that the mapping from underlying to surface trees is of linear size increase. Peters & Ritchie already pointed out the importance of this restriction for keeping the power of transformational grammar in check.

      So what distinguishes the Chomskyan formalisms from GPSG? Well, GPSG's mapping is just the identity function, but on the other hand the definition of the underlying tree language is more involved (Jim Rogers showed how GPSG's meta-rules correspond to closure properties on the set of formulas that define a grammar).

      The differences to TAG become apparent once one looks at the translation from TAGs to movement-generalized MGs. First, TAG-style Adjunction corresponds to displacement of subtrees that may be neither heads nor maximal phrases, which to the best of my knowledge has never been entertained in the Chomskyan tradition. Second, Adjunction is not an instance of what one may call "uniform movement", as each instance of Adjunction is upward movement followed by lowering movement.

      In the transformational tradition, lowering movement has been entertained at various points, but it was mutually exclusive with upward movement: either a phrase moves upwards, or it moves downwards, it cannot do both (at least not in syntax proper). This has changed only recently with the arrival of sideward movement, which is really just a way of bundling upward and downward movement into one package. So MGs with sideward movement are a lot more similar to TAGs, but they still respect the ban against X'-movement (which is vacuous from a formal perspective, but does some work if you do not avail yourself of any tricks to turn X'-constituents into maximal phrases).

      As for the connection to CGs, you're right that the MG feature mechanism is closely modeled after slashed categories, but I do not consider that an important aspect of MGs. And many of the niftier aspects of slashed categories (e.g. recursive adjunction to a VP adjunct via type (VP/VP)/(VP/VP)) need to be emulated via empty categories in MGs.

      What's more, you can get rid of all features in MGs, and then it is a lot harder to state what the parallel to CG should be, since two of CG's main linguistic tenets are not part of MGs: flexible constituency, and a strong link between syntactic and semantic types. MGs can do flexible constituency, but it is not commonly used. As for MG semantics, the way I understand Greg's and Tim Hunter's work in that area, it seems that they allow for a greater divergence between syntax and semantics exactly because the former is not supposed to freely alter constituency in order to accommodate the latter.

      Bottom line: transformational formalisms share some macro-parameters that set them apart from the competition, and most of the technical changes we have seen boil down to i) unifying the mapping from underlying trees to surface trees, ii) moving the workload around (constraints on underlying trees VS surface trees VS transformations), and iii) varying the degree of lexicalization (features VS principles).

    19. @Greg

      The elimination of DS is not quite as complete as you suggest, at least in the standard versions of the theory. True, there is no complete segregation of DS operations prior to all movement. But there is a residue of this in that some operations (first merge to a theta position) precede any further I-merges of that element. The MTC tries to eliminate this distinction by allowing movement into theta positions. However, so far as I know, every theory assumes that a DP begins its derivational life by merging in a theta position. Thus there seems to be a 'when' per DP: first theta, then whatever.

      It is also worth noting that what you are discussing is MG, not minimalism "in the wild." Chomsky, at least, believes that the reduction of movement and structure building to species of merge is an important innovation of Minimalism. From what I can tell, this unification has proven to be difficult to formalize in MG (I may be wrong here, but structure building and "movement" remain formally distinct operations in the stuff I've seen).

      As you know, there are flavors of Minimalism that have also argued for unifying movement with construal, something else that GB did not do. So too with distinguishing case and theta heads (i.e., a head cannot theta-mark and case "mark" the same expression). At least within the syntax community these are thought of as important differences between GB and minimalist theory. Their impact on formalism, however, may not be as clear.

      So, at least in the standard vision, DS has not entirely disappeared; what has gone is the idea, which Chomsky dubbed SATISFY, that all theta roles are discharged before any movement. This leaves a residue of the earlier DS theory, however.

    20. "Chomsky, at least, believes that the reduction of movement and structure building to species of merge is an important innovation of Minimalism. From what I can tell, this unification has proven to be difficult to formalize in MG."
      There are three properties of Move that separate it from Merge in the standard definition of MGs:

      1) it involves different features,
      2) it is a unary operation,
      3) it must obey the Shortest Move Constraint.

      All of them are superficial.

      1) The distinction between Merge features and Move features can be dropped without changing anything about the licensed structures, it only affects how you have to write your lexicon to get a specific set of well-formed derivations.

      2) That Move is unary is just a matter of succinctness and simplicity, you can always make it binary like Merge, it just means that derivations are no longer trees but multi-dominance trees.

      3) We could define an analogue of the Shortest Move Constraint for Merge, and it would always be satisfied due to how Merge works.
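To make the contrast tangible, here is a deliberately stripped-down sketch (my own toy encoding, not a faithful MG implementation) in which Merge checks '=f' on one expression against 'f' on another, while Move checks '+f' against a '-f' mover parked inside the same expression; the movers dict is the extra memory that Move requires.

```python
# An expression is a dict: 'phon' (string), 'feats' (remaining features),
# and 'movers', a dict from licensee feature to a phrase awaiting movement.
def merge(sel, other):
    """Binary Merge: checks '=f' on one expression against 'f' on another."""
    f = sel["feats"][0]
    assert f.startswith("=") and other["feats"][0] == f[1:]
    # Pool the movers; a real SMC would reject duplicate licensee features.
    movers = {**sel["movers"], **other["movers"]}
    if len(other["feats"]) > 1:              # 'other' still has a licensee:
        movers[other["feats"][1]] = other["phon"]   # park it in memory
        phon = sel["phon"]                   # pronounce it only after moving
    else:
        phon = sel["phon"] + " " + other["phon"]
    return {"phon": phon, "feats": sel["feats"][1:], "movers": movers}

def move(expr):
    """Unary Move: checks '+f' against a '-f' mover inside the expression."""
    f = expr["feats"][0]
    assert f.startswith("+")
    movers = dict(expr["movers"])
    moved = movers.pop("-" + f[1:])          # must already be in memory
    return {"phon": moved + " " + expr["phon"],
            "feats": expr["feats"][1:], "movers": movers}

# A toy wh-derivation (no subject, purely illustrative):
what = {"phon": "what", "feats": ["d", "-wh"], "movers": {}}
eat  = {"phon": "eat",  "feats": ["=d", "v"],  "movers": {}}
did  = {"phon": "did",  "feats": ["=v", "+wh", "c"], "movers": {}}

vp = merge(eat, what)        # 'what' checks d, then waits on -wh
cp = move(merge(did, vp))    # +wh attracts the stored -wh phrase
print(cp["phon"])            # what did eat
```

The feature prefixes and the unarity of `move` are indeed cosmetic, but the `movers` memory is not: it is exactly the long-distance bookkeeping that plain `merge` never needs.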

      But all of this does not change the fact that an MG with Move is invariably more complicated than one that just uses Merge. Both the derivation trees and the mappings are more complex, both weak and strong generative capacity increase a lot, you need a very different type of memory to verify well-formedness, and parsing complexity increases; all as a result of Move introducing fairly intricate long-distance dependencies. And that basic fact is not due to some peculiarity of MGs, it also arises in the Collins & Stabler formalization, where you have to keep track of copies.

      Unification is a nice thing, but it's just as important to pay attention to the differences. Personally, I find it much more interesting that the increase in derivational complexity brought about by Move also occurs with unbounded recursive adjunction (adjunction to an adjunct of an adjunct of an...), which is something Chomsky has very little to say about (the pair merge story always struck me as uninspired). Even though adjunction is defined very differently, a cognitive architecture that can compute MGs with Move can also compute MGs with unbounded recursive adjunction, which I believe is a nice argument that adjunction comes for free. But that requires accepting first that Move is more complex than Merge.

    21. What a lovely discussion. I hope that Alex C. can extract the answer to his question, because if it is in the deep structure of these comments I surely missed it; e.g. what does Norbert's elaboration on degrees of disappearance of DS have to do with continuity of one computational property during the ST -- REST -- GB -- MG sequence? It IS of course curious that back in 1995 Chomsky celebrated the elimination of DS as a massive intellectual achievement of MP but that, apparently, in 2015 DS still has not been completely eliminated. In case Norbert wanted to draw our attention to this curious fact - THANK YOU.

      Now in this spirit of wandering wherever the flow of conversation might take us, maybe someone could elaborate on exactly how one gets flexible constituency in MGs? AFAIK, MG work somehow mimics flexible constituency by movement+deletion operations. But that seems to involve a lot of arbitrary and unnecessary operations, so maybe instead of more alphabet soup of abbreviations [to borrow Alex's phrase], you can explain how it works and why it is preferable? Let's just take a concrete example:

      [1] John offered Mary, and Robin gave Terry, a couple of tickets to the Budapest String Quartet performance.

      As familiar, one gets flexibility of constituency in CG by hypothetical reasoning. So 'Robin gave Terry' corresponds to a real constituent, with its own compositional semantics. So one is combining two constituents which are looking for an NP ('a couple of tickets to the Budapest String Quartet performance'). Nothing moves, nothing is deleted. That seems more elegant to me than all those additional operations required by MG - but I am no expert so maybe you can show us the detailed analysis of [1] and compare it to the CG analysis?
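For reference, the standard CCG-style derivation of 'Robin gave Terry' as a constituent of category s/np can be sketched with three combinators (a minimal encoding of my own; categories are nested tuples, ('/', b, a) for b/a and ('\\', a, b) for a\b, with the argument on the slash side).

```python
NP, S = "np", "s"

def fapp(x, y):
    """Forward application: b/a  a  =>  b."""
    assert x[0] == "/" and x[2] == y
    return x[1]

def traise(x):
    """Forward type raising: a  =>  s/(a\\s)."""
    return ("/", S, ("\\", x, S))

def fcomp(x, y):
    """Forward composition: a/b  b/c  =>  a/c."""
    assert x[0] == "/" and y[0] == "/" and x[2] == y[1]
    return ("/", x[1], y[2])

gave = ("/", ("/", ("\\", NP, S), NP), NP)    # ((np\s)/np)/np, ditransitive

# "Robin gave Terry" as a constituent of category s/np:
vp_slash_np = fapp(gave, NP)                  # gave Terry : (np\s)/np
robin_raised = traise(NP)                     # Robin      : s/(np\s)
conjunct = fcomp(robin_raised, vp_slash_np)   # Robin gave Terry : s/np
print(conjunct)                               # ('/', 's', 'np')
```

Two such s/np conjuncts then coordinate and apply to the shared object NP, with no movement and no deletion, which is the elegance Christina is pointing to.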

    22. Just a short addition to the discussion about elimination of d-structure. There was also the empirical argument from 'tough'-movement constructions that it was not possible to have all theta-assignment preceding all movement. In other words, sometimes theta-assigners are created by movement. In combination with the assumption that theta roles are assigned at first merge (the "residue" that Norbert mentioned), this forces us to allow move and merge steps to be interspersed.

      I'm not sure exactly how this relates to Greg's conception of the shift to MP: "observing that all the grammatically relevant information involving move was contained in the reentrant arcs" sounds like it makes it a purely formal, rather than empirical shift ... is that right?

    23. @Christina: The analysis is pretty much the same, and is implemented in two steps. First you add new lexical items to the lexicon that allow you to Merge Terry and Robin with gave, without the need for a third argument. Said third argument is directly merged in its target position. That gets you the flexible constituency, but also overgenerates because now the grammar generates Robin gave Terry as well (which strictly speaking is syntactically well-formed and only semantically odd, but that's beside the point here).

      In order to get rid of overgeneration, you define a regular tree language that contains only the derivations with non-flexible MG constituencies and the desired flexible CG constituencies. Via a specific algorithm this regular tree language is directly precompiled into the grammar as a filter on derivations, thus giving you a grammar with flexible CG constituency.

      An alternative route is to lift incomplete constituents to complete ones by merging empty heads in empty argument slots and limiting the distribution of these empty argument fillers. Not much of a difference, but it's more transparent, doesn't require as many lexical items, and highlights the parallel between hypothetical reasoning in CGs and movement in MGs. Greg has some insightful observations on this in his treatment of the A-A' distinction, although he doesn't make an explicit connection to CG, I think.

      All these derivations look awfully similar, and they're also very close to those with across-the-board movement, or sideward movement of the third argument, and so on. They're all just ways of satisfying an argument position by a phrase that is realized somewhere else.

      What the first two strategies I mentioned have in common is that they replace movement of the third argument by Merge, and as such they will fail whenever remnant movement is involved. So if there are linguistic constructions where CCG posits flexible constituency and a sequence of combinatorial operations that correspond to remnant movement, we also have to integrate multiple Move steps with the flexible constituency in some fashion. That can be done, but since I don't know of any such cases I can't tell you what exactly one would have to take care of.

    24. @Christina: My previous comment got eaten by a lovely 503 service disruption, so here's the abridged version.

      There's many different ways to get flexible constituency, some involve Move, some Merge, but they all follow the same strategy for the example you gave: discharge the third argument position in some way, and then insert the third argument at the target position. Discharging the third argument could take the form of adding a new entry for gave that only needs two arguments; or merging an empty head as the third argument; or replacing Merger of that argument by an instance of sideward movement of the third argument from some other position into the argument position. Insertion at the target position is either achieved directly via base merger there, or movement into that position. All those accounts look pretty much the same from a derivational perspective in that they involve a dependency between the target site and the two argument positions. They only differ in how this dependency is encoded.

      Since you mentioned hypothetical reasoning, let me add that hypothetical reasoning can be added to MGs without an increase in strong generative capacity. Essentially, it's just a type of covert downward movement.

    25. Well, it looks like the first comment made it after all, but I'll keep the shortened version around anyways since it does a better job at bringing out the core idea that drives all the different implementations.

    26. @Tim: Greg's remark is purely formal. He's saying that a GB derivation consisted of two parts, a top part that's exclusively Move steps, and a bottom part that's exclusively lexical items and Merge steps. Each Move node is related to two positions in the Merge half, the mover and the target of Movement. The step from GB to Minimalism, then, amounts to throwing away the top part and adding an arc from the mover to the target, which provides all the information you need to know.

      Note that this has absolutely nothing to say about whether movers must start out in theta positions. If you have an MG derivation where they don't, you can translate it back into a GB derivation where they don't, and the other way round. The only thing that falls out immediately is that a mover cannot move before it has been merged, so you have an implicit "merge before you move" restriction for each moving LI. The same is not true for the LI at the target site: with lowering movement, for example, you can have a Move step that is implicitly satisfied by a mover that still needs to be merged. That's why lowering is very similar to hypothetical reasoning.

    27. I have no problem with your description of Greg's point. What I was reacting to was the idea that MP amounted to "throwing away the top part and adding an arc from the mover to the target." This may be some of what Minimalism amounted to, but it does not exhaust it, nor in my opinion does it really get to most of the interesting technology. Maybe this is what made MGs interesting for some, but it is not what grabbed the attention of many syntacticians. Now maybe we should not have been impressed with the copy theory, Extension, and various feature-checking algorithms. But as they were what we used to derive some of the observed universals, this is what grabbed many people's attention. So, for example, it was the link between theta roles and phrase structure that got me interested in the possibility of control as movement. So, my reaction to Greg's point was a small one: what he highlighted, the elimination of DS, was perhaps the most interesting formal feature of the move from GB, but it is not what many MPers would point to, in part because the elimination of DS was less complete than advertised.

    28. @CB: You mentioned in the past that you lacked linguistic training, and at the beginning nobody objected to your activity here. There were even lots of reading recommendations for you, if I remember well. Yet you kept hammering at everything that moved in this blog, and obviously without having read (or understood) the technical details of linguistic theory which the people in this blog have been involved in, some even for several decades, and with some even shaping the technical developments.

      Even recent hints that instead of "reviewing" books and spending your time on social media playing the "critic" you should actually write and publish original research went right by you. What you have "contributed" to this blog is a lot of anti just for the sake of being anti, very few original thoughts, and certainly nothing well founded. I think the majority of the responses show that I'm not alone feeling this way.

      However, your last posting is just taking the p*** out of every reader's last bit of patience:

      "e.g. what does Norbert's elaboration on degrees of disappearance of DS have to do with continuity of one computational property during the the ST -- REST -- GB -- MG sequence? It IS of course curious that back in 1995 Chomsky celebrated the elimination of DS as a massive intellectual achievement of MP but that, apparently, in 2015 DS still has not been completely eliminated. In case Norbert wanted to draw our attention to this curious fact"

      I will NOT do you the favor and take your complete ignorance apart here. You may get some more serious responses by people who may think that you actually read up on technical details, but before you now engage on actual technical components of syntactic theory, about which you know nothing and so far obviously haven't read or understood anything, I suggest you take a gracious exit so that we can actually discuss issues that WE care about. You can then post your contrary views on Facebook and leave us alone.

    29. Why, thank you, Kleanthes, for sharing your emotional responses so openly. Presumably, you are able to empathize with people on 'the other side' who are similarly affected by Norbert's ongoing put-downs as you are by my sarcasm [you did realize I was being sarcastic, I hope]. As for my alleged ignorance of the technical literature: rest assured I am a lot more familiar with the technical literature of your side than you seem to be with the technical literature of 'the other side' [judging by the citations in your papers]. So your paternalism is a tad out of place...

      @Thomas: thank you for going through the effort of typing your answer twice - much appreciated.

    30. @Norbert: I agree with you that the revisionist picture I sketched would not have engendered the enthusiasm of the actual introduction of the MP.
      My point was merely that, with the benefit of hindsight, the shift from GB to MP can be imho viewed as a very simple formal tweak (with important consequences, formal and otherwise). The rhetorical point of my statement was to provide a formal basis for the feeling of continuity between GB and MP which was being asked about above.
      My main point was that, in response to Alex's question about why GB-MP is felt to be 'closer' than GB-TAG/HPSG/LFG/etc, the formal tweak which derived MP from GB meant that much of the analytical style and methodological assumptions of the latter could be recast without much more than notational changes into the former. (Sometimes notational changes are important.) I don't think that there is much computational similarity between the two, just as I don't think that there is much computational similarity between ((R)E)ST and GB.

      @Christina: I think you are right that 'flexible constituency' effects are a real selling point of categorial-type grammars; they've got the best analysis, hands down. Of course, there are alternative analyses in other frameworks, across-the-board movement being an obvious choice in transformational-style theories, but in comparison to CG this may seem something like a 'usine à gaz'. Transformational theories have imho some of the most elegant accounts of GF-changing constructions, of wh-constructions, elliptical constructions, and many others.

      I do not think that your metric of simplicity is appropriate, however. (TL;DR: you need to consider the ability of the framework to handle all constructions, not just a single one.) It is well known that classical CG and the Lambek calculus are too weak to describe the patterns of natural language. Extensions thereof, like CCG, multi-modal CG, the Lambek-Grishin calculus, the displacement calculus, hybrid type-logical CG, etc, are strictly more complicated than vanilla CG. Some of these are more complicated than MG (the formal version of MP), and some (CCG) are less complicated than MG. Ideally, we'd have a single formalism which gave the most elegant analyses possible to everything (the best of all worlds). I doubt that this is possible, given the well-known results in the computer science literature about the succinctness gains of more powerful descriptive apparatuses over less powerful ones. As long as we are aiming for a restrictive theory of grammatical description, we will most likely be forced to trade off descriptive elegance in one construction for descriptive elegance in others.

    31. @Greg: Just out of curiosity (and ignorance) – how would the Hypothetical Reasoning approach deal with the ungrammaticality of (i)?

      (i) * John convinced three, and Mary convinced the, men that they should leave.

    32. @Omer: There is no general 'hypothetical reasoning approach'; there is a huge slew of grammatical frameworks which make use of a hypothetical reasoning operation. You are right that in the Lambek calculus, with the standard lexical type assignments (Det := NP/N, convince := (NP\S)/S/NP), the above sentence is generable. I don't know how practitioners of other such frameworks would respond to the relative unacceptability of your example.

      I think that the general point you are raising is right, though: there is a constant interplay between theory development and its testing against data. What advantage one sort of approach might seem to have today in empirical domain X might disappear tomorrow once more becomes known about X. This is one reason why I am happy to see lots of people working in different traditions.

    33. @Omer,

      I can only think that you want to specify, lexically, that ‘the’ has to form a constituent with a following noun (does this cause problems elsewhere?), but ‘three’ doesn't—so then ‘convinced three’ is a possible constituent but ‘convinced the’ isn't. Whether you do this in terms of a unary modality, or in terms of the difference between a non-associative vs associative merge mode, seems to be largely a matter of taste.

      the:= []-1np/n
      three:= np/n (or (s/(np\s))/n or whatever)

      or

      the:= np /i n
      three:= np /j n

      This takes us beyond the Lambek calculus, but that's pretty much inevitable given what Greg has been saying: the Lambek calculus both undergenerates and overgenerates.
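Avery's mode idea can be sketched in a small tuple encoding of my own (backslash categories left unmoded for brevity): every slash carries a mode, application ignores the mode, and forward composition is licensed only for the associative mode 'j'. Then 'convinced three' is a derivable constituent while 'convinced the' is not.

```python
NP, N, S = "np", "n", "s"

def fapp(x, y):
    """Forward application b/a + a => b; licensed for either mode."""
    return x[1] if x[0] == "/" and x[2] == y else None

def fcomp(x, y):
    """Forward composition a/b + b/c => a/c; needs y's slash associative."""
    ok = x[0] == "/" and y[0] == "/" and x[2] == y[1] and y[3] == "j"
    return ("/", x[1], y[2], x[3]) if ok else None

# Categories are ('/', result, argument, mode).
convinced = ("/", ("\\", NP, S), NP, "j")   # (np\s)/np, simplified
three = ("/", NP, N, "j")   # associative: 'three' may compose
the   = ("/", NP, N, "i")   # non-associative: must meet its noun directly

print(fapp(the, N))               # np: 'the men' is fine by application
print(fcomp(convinced, three))    # ('/', ('\\', 'np', 's'), 'n', 'j')
print(fcomp(convinced, the))      # None: 'convinced the' is no constituent
```

Whether one writes this as a mode on the slash or as a unary modality on the lexical type is, as Avery says, largely a matter of taste; the licensing condition on composition is the same either way.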

    34. This comment has been removed by the author.

    35. Greg: "Ideally, we'd have a single formalism which gave the most elegant analyses possible to everything (the best of all worlds). I doubt that this is possible, given the well-known results in the computer science literature about the succinctness gains of more powerful descriptive apparatuses over less powerful ones. ..."

      But generative grammar includes (or maybe is even based on) the idea that there are facts of the matter about what kinds of generalizations are possible in language (a conceptual necessity for timely learning, with various empirical observations relevant to what kinds of generalizations we don't need to provide for, e.g. moving clitics to in front of the last word of a clause), and also, I think, a further claim that we can produce an optimal notation for those generalizations, with messy things explicable in terms of historical or cultural factors: e.g. the massive near-duplications in English vocab as a consequence of the Norman Conquest, or the inhibitions on (classic, 'recursive symbol' (Bach 1964/74)) recursion in Piraha as a consequence of a strong cultural inhibition against fancy linguistic performances. We haven't found this notation yet, and maybe it doesn't exist (though there has been progress, such as the revisions to how passive constructions work), but the project is not to find a notation that works for everything, only one that works for languages naturally learnable by humans.

    36. @Avery: I'm not quite sure what exactly you mean by notation (are we just talking about things like, say, the choice between features and constraints, or does even CCG vs TAG fall under that?). But irrespective of that, I think Greg was actually pointing out two distinct but closely related issues:

      1) it is fairly easy to design a formalism that is "better than reality" if you only have to account for a subset of the data. For example, a formalism that only handles local processes is simpler than one that also handles non-local ones. In this case it is easy to see that the former is too weak because we know that non-local processes exist, but there are many other properties of language we do not know, many so abstract that we do not even realize that we do not know them. Our picture of the empirical facts is always incomplete, and that means that the "wrong" formalism can win out against the "right" one. So claims like "formalism X has a much more elegant analysis" come with the major caveat that this might be possible only because the formalism completely fails for another construction that nobody's looked at yet.
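      A toy illustration of this first point (my example, not Thomas's): a strictly local bigram "grammar" is simpler than anything handling long-distance dependencies, and it looks perfectly adequate on the data seen so far, until a non-local pattern like a^n b^n exposes it.

```python
# Toy case of a formalism "winning" on a data subset (illustration only):
# a strictly local bigram acceptor vs the real, non-local pattern a^n b^n.

ALLOWED_BIGRAMS = {('a', 'a'), ('a', 'b'), ('b', 'b')}

def bigram_ok(s):
    """Purely local check: every adjacent pair must be licensed."""
    return all(pair in ALLOWED_BIGRAMS for pair in zip(s, s[1:]))

def nested_ok(s):
    """The actual non-local generalization: a^n b^n."""
    n = len(s) // 2
    return s == 'a' * n + 'b' * n

# On the balanced strings seen so far, the cheap local grammar looks perfect...
for s in ['ab', 'aabb', 'aaabbb']:
    assert bigram_ok(s) and nested_ok(s)

# ...but it silently overgenerates once unseen non-local facts arrive.
print(bigram_ok('aaab'))  # True: accepted, though it violates a^n b^n
print(nested_ok('aaab'))  # False
```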

      2) Even if we had a full picture of all the facts, there is no guarantee that there is a unique solution. Formalism A may be better for phenomenon X while formalism B is superior for phenomenon Y. So which one do you pick in this case? You might want to bring in psycholinguistic or neurological evidence, but then you need a linking hypothesis, and there you've also got several to choose from. Suppose you have a formula a*m + b*n = 1, where a and b are indicators of the relative importance of m and n. Then you can't give me a unique solution unless the weights a and b are fixed, but we have no way of fixing them.
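      Plugging hypothetical numbers into that weighting formula shows the non-uniqueness directly: which formalism "wins" flips with the weights a and b, and nothing fixes them.

```python
# Hypothetical costs only (lower is better): A is better on phenomenon X,
# B on phenomenon Y; the overall winner depends entirely on the weights.

scores = {
    'A': (1.0, 3.0),  # (cost on X, cost on Y)
    'B': (3.0, 1.0),
}

def total(f, a, b):
    m, n = scores[f]
    return a * m + b * n

for a, b in [(0.8, 0.2), (0.2, 0.8)]:
    best = min(scores, key=lambda f: total(f, a, b))
    print(f"a={a}, b={b}: best formalism = {best}")
# a=0.8, b=0.2 -> A; a=0.2, b=0.8 -> B
```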

      These problems arise exactly because generative grammar puts a strong emphasis on notation and succinctness (a trade-off between grammar complexity and structural complexity, with the weights of each factor mostly determined by personal taste). A physicist just needs a translation between different theories so that they can pick whatever is easier to use for a given problem. That simply doesn't make any sense for most generative grammarians because the grammar as specified is taken to be a real object of human cognition. In combination with the common assumption that everybody uses the same grammar (rather than my brain running MGs and yours LFGs), this means that there can be only one true grammar formalism, which is the main non-utilitarian motivation for succinctness criteria.

    37. @Thomas:
      Notation is, for me (and, I believe, Chomsky 1957, 1965 and numerous other writings from the early period at least, and also at least some contemporary theories such as LFG), exactly what you write down as a grammar to produce a given language. When it exists formally at all (as in the XLE-LFG system or the SPE phonological rule formalization), it has usually consisted of a combination of compiler and interpreter: the compiler converts what the linguist writes ('S -> NP, VP', etc. for LFG, with the comma meaning free ordering of NP and VP) into something the interpreter uses to parse sentences or produce them on the basis of some kind of semantic input. So, on this account, it would in principle be possible for two theories to have very different innards in their interpreters (one with classic derivational movement, another with no movement but some kind of reentrancy scheme) but to be the same theory, because, due to how other things worked, they defined exactly the same correspondence between grammars and languages.
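      A minimal sketch of the compiler half of this picture (the rule syntax and expansion here are illustrative, not the actual XLE-LFG format): the linguist writes 'S -> NP, VP' with the comma meaning free ordering, and the compiler expands it into the ordered productions an interpreter would consume.

```python
# Illustrative compiler step: expand a free-ordering rule into the
# ordered productions a parser/interpreter can use directly.

from itertools import permutations

def compile_rule(rule):
    lhs, rhs = rule.split('->')
    daughters = [c.strip() for c in rhs.split(',')]
    # comma-separated daughters may appear in any order
    return [(lhs.strip(), list(p)) for p in permutations(daughters)]

print(compile_rule('S -> NP, VP'))
# [('S', ['NP', 'VP']), ('S', ['VP', 'NP'])]
```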

      re 1: I thought that Greg was reminding us that there is in fact no notation, interpreted as a compression scheme for the data, that is optimal for *all* situations. This is, I believe, a mathematical fact, though the details of the proof are beyond me. But it seems plausible. The difficulty of coming up with one notation for all natural language grammars is, however, a different issue. And of course working this out optimally for complete grammars of all languages is a very tough problem (!!!). As the discussion here of CG goes ... if CCG had worked-out and attractive analyses of case in Icelandic and Kayardild, and of word order etc. in Modern Greek, I would surely become a practitioner ...
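      The no-universally-optimal-compression point can be seen with a toy run-length code (my illustration, not part of any proof): it shortens repetitive input but inflates input with no runs.

```python
# A toy run-length code: good for runs, terrible for run-free input,
# so no single scheme is best for all data (illustration only).

def rle(s):
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append(s[i] + str(j - i))
        i = j
    return ''.join(out)

print(rle('a' * 20), len(rle('a' * 20)))  # a20 3   (vs 20 characters raw)
print(len(rle('abcdefghij')))             # 20      (vs 10 characters raw)
```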

      re 2: The nonuniqueness problem is unlikely to go away any time soon, but some ideas can nevertheless be dismissed as producing worse results for a very wide range of phenomena, I think. For example the idea discussed at the beginning of Pesetsky 2013 (Russian Case: Morphology and the Syntactic Categories) of dispensing with morphosyntactic features such as case in favor of direct reference to the actual morphemes used to construct the case forms. Constructing something that functions at a minimal level for all somewhat decently described linguistic phenomena has become an extremely exacting task, and there are not likely to be many serious candidates in the ring at the same time. LFG has a fair number of implemented wide coverage grammars ATM, Minimalism a lot of ideas, many of which are extremely interesting and often seem to explain things that LFG doesn't handle very well (the concentric nature of nominal modification, for example). So the comparisons that people want to make are often of unlike things, and putting the pieces together is hard.

      I view the 'succinctness' criterion as an attempt to say something about what is recognized as a generalization by the LAD, so that the intuitions about elegance etc. should be taken as hunches about what will work out best in the longer term. It is a notable fact that linguists often talk about predictions, etc., but actually very rarely make concrete claims, let alone demonstrate them, so the program is really not very far advanced in those terms, although the amount of insightful description that has been produced increases relentlessly, it seems to me, and I think it will come together at some point. Meanwhile CB and DE point out that in many respects this hasn't happened yet, which is not a problem by my lights, and disagreements about what is 'best' are surely inevitable.

    38. Saying a bit more about this, one might regard the role of generative grammar as the construction of a special purpose programming language (most like a logic programming language) for writing grammar implementations, where the empirical, explanatory aspect of the project is to try to make it as true as possible that the grammars you write on the basis of relatively small naturalistic samples of the language will work out for larger samples, especially those containing complex and therefore rare structure.

      Furthermore, most of the actual discussion is about what the basic principles ought to be rather than the actual syntax of the finished language, and the Chomskian camp criticizes some of the others (e.g. the LFG community) for making too many premature decisions about the latter. This is a perfectly rational criticism, regardless of whether any particular person chooses to be moved by it or not. That the Chomskian proposals are too obscure, internally contradictory etc. is also a rational criticism, which sensible people can choose to put aside.

  8. Thanks for all of this interesting discussion, all. I think I feel like each of you sometimes depending on mood/time of day/number of cups of coffee.

    But seriously, it seems clearly right that bizarre, stupid, and/or counter-productive scientific ideas should be called out as such. The problem is, how do you identify them? Consider Linguist A, who privately thinks that it is a bizarre, stupid, counter-productive idea to pay attention to phenomenon X (say, usage data) or to use tool Y (say, probabilistic models) when trying to build an explanatory theory of language. On the other hand, Linguist B privately thinks that it is a bizarre, stupid, counter-productive idea *not* to pay attention to phenomenon X and use tool Y when trying to build an explanatory theory of language.

    Suppose they get it in their heads to discuss these matters, each trying to convince the other. Will anyone succeed? I suppose we can all agree that what *ought* to happen is this: the one who is *actually* right should win. The problem, though, is that we can't identify in advance which one is actually right. Since we don't have this knowledge, how should we recommend that they conduct the discussion? Since they're both strongly convinced of their positions, Norbert's recommendation, it appears to me, is that they conduct the discussion by (inter alia) ridiculing each other's positions. I doubt that this is likely to produce much in the way of scientific progress, though it is certainly likely to make people angry. Human nature being what it is, it's probably also likely to make them resort to extra-scientific mechanisms (e.g., influence over grants and hiring) to thwart inquiry into the other's position. This seems like a net loss for science: it converts what should have been a discussion of the scientific merits of the two positions into a power game.

    Replies
    1. Ridicule is worthy of the ridiculous. What's ridiculous? Well, that is a judgment call, but the kinds of things you raise do not seem to me to reach that bar. I have tried to provide some examples of the contemporary ridiculous: e.g. the constant confusion between Greenberg vs Chomsky Universals, the view that GG has discovered nothing of significance in 60 years of work, the idea that standard methods of data collection are so flawed that all work based on them is worthless. There are others, but these three suffice. Are all disagreements like this? Of course not. But some are, and they have been long lived and influential, especially now. Treating these as if they were reasonable is, IMO, both unreasonable and a disservice both to the field and to outsiders looking in.

      You note that there is a risk that calling the dumb "dumb" might encourage some to start calling things that are not dumb "dumb." Sure. And you seem to suggest that, so that this does not happen, we should never call anything that is dumb "dumb." I am not a fan of slippery slope arguments in general. Why? Because they sidestep the obvious: that not all methods of argument are appropriate for all occasions. Just as there is no single scientific method (one size fits all), neither is there just one style of argument. And just as in the SM case, judgment is called for. I know, this can be abused. But what can't be? And recall, pretending that dumb things aren't dumb also has a cost.

    2. This comment has been removed by the author.

    3. We are not talking about instances of honorable disagreement over high-level scientific hunches and interests. We are talking about a plague of work that also violates the most minimal standards of factual accuracy and logical thinking at every level of discussion — low-level as well as high-level. This is a point that has surfaced in some of Norbert's columns, but I think has not been sufficiently stressed. The work that's getting tagged here with the j-word doesn't merely fail to appreciate, say, the subtle logic of Poverty of the Stimulus arguments, but also screws up simple facts about the languages it mentions, misrepresents the literature, fails to support claims with argument, and worse — yet gets published in a high-profile (usually field-external) venue and blurbed by the press. I don't think there's a slippery slope to worry about. The line seems quite clear, and if we care about the future of our field, we can't afford to pretend otherwise.

      That said, it's not clear how best to fight the plague, nor how to restore the health of the field in its wake.

      The real cure is education. The public does not have to know or care about the details of the latest research, but they should at least know that words and sentences have structure, know the difference between letters and sounds (how many times have undergraduate intro students told me "Chinese is not a phonetic language"?), that language acquisition is an intricate puzzle, and that there are smart people called linguists who study this stuff for a living. They don't, which is why they are easy prey for linguistic nutsiness. Sadly, "public" in this case includes not only the average person on the street but also our colleagues in other fields. And we're not likely to see linguistics in every high school (where it truly belongs) any time soon.

      Which brings us back to the discussion in progress on this blog, but hopefully with the notion of a slippery slope put to rest. I can be sympathetic to Dan's worries, but I think they are the least of our problems at the moment.
