Comments

Thursday, July 20, 2017

Is linguistics a science?

I have a confession to make: I read (and even monetarily support) Aeon. I know that they publish junk (e.g. Evans has dumped junk on its pages twice), but I think the idea of trying to popularize the recondite for the neophyte is a worthwhile endeavor, even if it occasionally goes awry. I mention this because Aeon has done it again. The editors clearly understand the value (measured in eyeballs) of a discussion of Chomsky. And I was expecting the worst, another Evans-like or Everett-like or Wolfe-like effort. In other words, I was looking forward to extreme irritation. To my delight, I was disappointed. The piece (by Arika Okrent here) got many things right. That said, it is not a good discussion, and it will leave many more confused and misinformed than they should be. In what follows I will try to outline my personal listing of pros and cons. I hope to be brief, but I might fail.

The title of Okrent’s piece is the title of this post. The question at issue is whether Chomskyan linguistics is scientific. Other brands get mentioned in passing, but the piece, Is linguistics a science? (ILAS), is clearly about the Chomsky view of GG (CGG). The subtitle sets (part of) the tone:

Much of linguistic theory is so abstract and dependent on theoretical apparatus that it might be impossible to explain

ILAS goes into how CGG is “so abstract” and raises the possibility that this level of abstraction “might” (hmm, weasel word warning!) make it incomprehensible to the non-initiated, but it sadly fails to explain how this distinguishes CGG from virtually any other inquiry of substance. And by this I mean not merely other “sciences” but even biblical criticism, anthropology, cliometrics, economics etc.  Any domain that is intensively studied will create technical, theoretical and verbal barriers to entry by the unprepared. One of the jobs of popularization is to allow non-experts to see through this surface dazzle to the core ideas and results. Much as I admire the progress that CGG has made over the last 60 years, I really doubt that its abstractions are that hard to understand if patiently explained. I speak from experience here. I do this regularly, and it’s really not that hard. So, contrary to ILAS, I am quite sure that CGG can be explained to the interested layperson and the vapor of obscurity that this whiff of ineffability spritzes into the discussion is a major disservice. (Preview of things to come: in my next post I will try (again) to lay out the basic logic of the CGG program in a way accessible (I hope) to a Sci Am reader).

Actually, many parts of ILAS are much worse than this and will not help in the important task of educating the non-professional. Here are some not so random examples of what I mean: ILAS claims that CGG is a “challenge to the scientific method itself” (2), suggests that it is “unfalsifiable” Popper-wise (2), that it eschews “predictions” (3), that it exploits a kind of data that is “unusual for a science” (5), suggests that it is fundamentally unempirical in that “Universal grammar is not a hypothesis to be tested, but a foundational assumption” (6), bemoans that many CGG claims are “maddeningly circular or at the very least extremely confusing” (6), complains that CGG “grew ever more technically complex,” with ever more “levels and stipulations,” and ever more “theoretical machinery” (7), asserts that MP, CGG’s latest theoretical turn, confuses “even linguists” (including Okrent!) (7), may be more philosophy than science (7), moots the possibility that “a major part of it is unfalsifiable” and “elusive” and “so abstract and dependent on theoretical apparatus that it might be impossible to explain” (7), moots the possibility that CGG is post-truth in that there is nothing (not much?) “at stake in determining which way of looking at things is the right one” (8), and ends with a parallel between Christian faith and CGG, both described as “not designed for falsification” (9). These claims, spread as they are throughout ILAS, leave the impression that CGG is some kind of weird semi mystical view (part philosophy, part religion, part science), which is justifiably confusing to the amateur and professional alike. Don’t get me wrong: ILAS can appreciate why some might find this obscure hunt for the unempirical abstract worth pursuing, but the “impulse” is clearly more Aquarian (as in age of) than scientific. Here’s ILAS (8):

I must admit, there have been times when, upon going through some highly technical, abstract analysis of why some surface phenomena in two very different languages can be captured by a single structural principle, I get a fuzzy, shimmering glimpse in my peripheral vision of a deeper truth about language. Really, it’s not even a glimpse, but a ghost of a leading edge of something that might come into view but could just as easily not be there at all. I feel it, but I feel no impulse to pursue it. I can understand, though, why there are people who do feel that impulse.

Did I say “semi mystical”? Change that to pure Saint Teresa of Avila. So there is a lot to dislike here.[1]

That said, ILAS also makes some decent points and in this it rises way above the shoddiness of Evans, Everett and Wolfe. It correctly notes that science is “a messy business” and relies on abstraction to civilize its inquiries (1), it notes that “the human capacity for language,” not “the nature of language,” is the focus of CGG inquiry (5), it notes the CGG focus on linguistic creativity and the G knowledge it implicates (4), it observes the importance of negative data (“intentional violations and bad examples”) to plumbing the structure of the human capacity (5), it endorses a ling vs lang distinction within linguistics (“There are many linguists who look at language use in the real world … without making any commitment to whether or not the descriptions are part of an innate universal grammar”) (6), it distinguishes Chomsky’s conception of UG from a Greenberg version (sans naming the distinction in this way)  and notes that the term ‘universal grammar’ can be confusing to many (6):

The phrase ‘universal grammar’ gives the impression that it’s going to be a list of features common to all languages, statements such as ‘all languages have nouns’ or ‘all languages mark verbs for tense’. But there are very few features shared by all known languages, possibly none. The word ‘universal’ is misleading here too. It seems like it should mean ‘found in all languages’ but in this case it means something like ‘found in all humans’ (because otherwise they would not be able to learn language as they do.)

And it also notes the virtues of abstraction (7).

Despite these virtues (and I really like that above explanation of ‘universal grammar’), ILAS largely obfuscates the issues at hand and gravely misrepresents CGG. There are several problems.

First, as noted, a central trope of ILAS is that CGG represents a “challenge to the scientific method itself” (2). In fact one problem ILAS sees with discussions of the Everett/Chomsky “debate” (yes, scare quotes) is that it obscures this more fundamental fact. How is it a challenge? Well, it is un-Popperian in that it insulates its core tenets (universal grammar) from falsifiability (3).

There are two big problems with this description. First, so far as I can see, there is nothing that ILAS says about CGG that could not be said about the uncontroversial sciences (e.g. physics). They too are not Popper falsifiable, as has been noted in the philo of science literature for well over 50 years now. Nobody who has looked at the Scientific Method thinks that falsifiability accurately describes scientific practice.[2] In fact, few think that either Falsificationism or the idea that science has a method are coherent positions. Lakatos has made this point endlessly, Feyerabend more amusingly. And so has virtually every other philosopher of science (Laudan, Cartwright, Hacking, to name three more). Adopting the Chomsky maxim that if a methodological dictum fails to apply to physics then it is not reasonable to hold linguistics to its standard, we can conclude that ILAS’s observation that certain CGG tenets are unfalsifiable (even if this is so) is not a problem peculiar to CGG. ILAS’s suggestion that it is such a problem is thus unfortunate.

Second, as Lakatos in particular has noted (but Quine also made his reputation on this, stealing the Duhem thesis), the central cores of scientific programs are never easily or directly testable empirically. Many linking hypotheses are required, and these can usually be adjusted to fend off recalcitrant data. This is no less true in physics than in linguistics. So, having cores that are very hard to test directly is not unique to CGG.

Lastly, being hard to test and being unempirical are not quite the same thing. Here’s what I mean. Take the claim that humans have a species-specific dedicated capacity to acquire natural languages. This claim rests on trivial observations (e.g. we humans learn French, dogs (smart as they are) don’t!). That this involves Gs in some way is trivially attested by the fact of linguistic creativity (the capacity to use and understand novel sentences). That it is a species capacity is obvious to any parent of any child. These are empirical truisms, so well grounded in fact that disputing their accuracy is silly. The question is not (and never has been) whether humans have these capacities, but what the fine structure of these capacities is. In this sense, CGG is not a theory, any more than MP is. It is a project resting on trivially true facts. Of course, any specification of the capacity gives empirical and theoretical hostages, and linguists have developed methods and arguments and data to test them. But we don’t “test” whether FL/UG exists because it is trivially obvious that it does. Of course humans are built for language, much as ants are built to dead reckon, birds to fly, or fish to swim. So the problem is not that this assumption is insulated from test and that holding it is thus unempirical and unscientific. Rather, this assumption is not tested for the same reason that we don’t test the proposition that the Atlantic Ocean exists. You’d be foolish to waste your time. So, CGG is a project, as Chomsky is noted as saying, and the project has been successful in that it has delivered various theories concerning how the truism could be true, and these are tested every day, in exactly the kinds of ways that other sciences test their claims. So, contrary to ILAS, there is nothing novel in linguistic methodology. Period. The questions being asked are (somewhat) novel, but the methods of investigation are pure white bread.[3] That ILAS suggests otherwise is both incorrect and a deep disservice.

Another central feature of ILAS is the idea that CGG has been getting progressively more abstract, removed from facts, technical, and stipulative. This is a version of the common theme that CGG is always changing and getting more abstruse. Is ILAS pining for the simple days of LSLT and Syntactic Structures? Has Okrent read these? (I actually doubt it, given that nobody under a certain age looks at these anymore.) At any rate, in this regard too CGG is no different from any other program of inquiry. Yes, complexity flourishes, for the simple reason that more complex issues are addressed. That’s what happens when there is progress. However, ILAS suggests that contemporary complexity contrasts with the simplicity of an earlier golden age, and this is incorrect. Again, let me explain.

One of the hallmarks of successful inquiry is that it builds on insights that came before. This is especially true in the sciences, where later work (e.g. Einstein) builds on earlier work (e.g. Newton). A mark of this is that newer theories are expected to cover (more or less) the same territory as previous ones. One way of doing this is for the newbies to have the oldsters as limit cases (e.g. you get Newton from Einstein in the limit where speeds are low relative to that of light). This is what makes scientific inquiry progressive (shoulders and giants and all that). Well, linguistics has this too (see here for the first of several posts illustrating this with a Whig History). Once one removes the technicalia (important stuff, btw), common themes emerge that have been conserved through virtually every version of CGG accounts (constituency, hierarchy, locality, non-local dependency, displacement) in virtually the same way. So, contrary to the impression ILAS provides, CGG is not an ever more complex blooming buzzing mass of obscurities. Or at least no more so than any other progressive inquiry. There are technical changes galore as the bounds of empirical inquiry expand, but earlier results are preserved largely intact in subsequent theory. The suggestion that there is something particularly odd about the way this happens in CGG is just incorrect. And again, suggesting as much is a real disservice and an obfuscation.

Let me end with one more point, one where I kinda like what ILAS says, but not quite. It is hard to tell whether ILAS likes abstraction or doesn’t. Does it obscure or clarify? Does it make empirical contact harder or easier?  I am not sure what ILAS concludes, but the problem of abstraction seems contentious in the piece.  It should not be. Let me end on that theme.

First, abstraction is required to get any inquiry off the ground. Data is never unvarnished. But more importantly, only by abstracting away from irrelevancies can phenomena be identified at all. ILAS notes this in discussing friction and gravitational attraction. It’s true in linguistics too. Everyone recognizes performance errors, and most recognize that it is legit to abstract away from memory limitations in studying the G aspects of linguistic creativity. At any rate, we all do it, and not just in linguistics. What is less appreciated, I believe, is that abstraction allows one to hone one’s questions and makes contact with the empirical possible. It was when we moved away from sentences uttered to judgments about well-formedness, investigated via differential acceptability, that we were able to start finding interesting Gish properties of native speakers. Looking at utterances in all their gory detail obscures what is going on. Just as with friction and gravity. Abstraction does not make it harder to find out what is going on, but easier.

A more contemporary example of this in linguistics is the focus on Merge. This abstracts away from a whole lot of stuff. But by ignoring many other features of G rules (besides the capacity to endlessly embed), it allows inquiry to focus on key features of G operations: they spawn endlessly many hierarchically organized structures that allow for displacement, reconstruction, etc. It also allows one to raise new possibilities in simplified form (do Gs allow for SW movement? Is inverse control/binding possible?). Abstraction need not make things more obscure. Abstracting away from irrelevancies is required to gain insight. It should be prized. ILAS fails to appreciate how CGG has progressed, in part, by honing sharper questions by abstracting away from side issues. One would hope a popularization might do this. ILAS did not. It made the virtues of abstraction harder to discern.

One more point: it has been suggested to me that many of the flaws I noted in ILAS were part of what made the piece publishable. In other words, it’s the price of getting accepted.  This might be so. I really don’t know. But, it is also irrelevant. If this is the price, then there are worse things than not getting published.  This is especially so for popular science pieces. The goal should be to faithfully reflect the main insights of what one is writing about. The art is figuring out how to simplify without undue distortion. ILAS does not meet this standard, I believe.


[1] The CGG as mysticism meme goes back a long way. I believe that Hockett’s review of  Chomsky’s earliest work made similar suggestions.
[2] In fact, few nowadays are able to identify a scientific method. Yes, there are rules of thumb like think clearly, try hard, use data, etc. But the days of thinking that there is a method, even in the developed sciences, are gone.
[3] John Collins has an exhaustive and definitive discussion of this point in his excellent book (here). Read it and then forget about methodological dualism evermore.

26 comments:

  1. What changed is not CGG, but the sociology of the field. People have grown lazier, perhaps in light of the increasingly tight job and funding market that rewards publication and methods over theoretical insight. People no longer have time to invest themselves in understanding technical details unless there is a sweet and immediate reward at the end. So generative grammar falls by the wayside, not because it has evolved beyond the grasp of the scientific community, but because that community has regressed into a more theoretically simplistic and shortsighted one.

    ReplyDelete
  2. Thank you for this really thorough engagement! When most of what you see is comments from people responding to the title, every true engagement, whether agreeing or objecting, is a wonderful gift. Plus, I am familiar enough with FoL to take the review “did not experience extreme irritation” as quite flattering.

    First, the title and tagline. My blood ran cold when I saw them. It was going to look like a betrayal of my tribe, like I had just published an article titled “Are Jews Really People?” But I think by this point most people with any kind of life on the internet know how titles work. The title that I think best captures the intention here would be “What’s at Stake in L’Affaire Chomsky/Everett: The Archetypes That Guide and Confuse”. (No self-respecting editor would let such a title through.)

    The narrative archetype, the villain/hero stuff, is what makes this a story that goes to the wider public in the first place. The scientific method archetype is what makes it interesting and also confusing. But it’s an archetype. I think a lot of your objections boil down to “none of this is unusual for normal science.” Still, evaluating “l’affaire” with respect to the archetype—which is the lens through which it is being projected in the public eye—is something that needs to be done to overcome the endless back and forth.

    Here’s a general response I gave through that hot new essay form, the twitter thread:
    -----
    Personally, I don’t think Popperian falsifiability is the be-all end-all decider for whether something is science or not.

    (To borrow an idea from dev psych’s attachment theory) It’s the parent you occasionally check in with to give yourself the confidence

    to roam more freely and widely. If you never leave the side of this parent you may never mature. Not roaming is a sign of insecurity

    But you do have to keep checking in. You might not be the best judge of when you’re mature enough to leave it behind

    Falsifiability is a central feature of the scientific method archetype. It’s part of an ideal, a useful one! But an ideal all the same.

    Evaluating the Chomsky/Everett debate w respect to that archetype goes a long way toward explaining what Chomsky is up to

    Something that’s devilishly hard to do for a popularizer. This is my attempt.

    I’m a linguist. Of course I think linguistics is a science. I also think any science of humans will be fraught

    And so difficult! Requiring both overly reckless departures from scientific ideals and overly stifling bean-counting returns to it

    Here’s a quote from @jsellenberg ‘s wonderful book on mathematics, How Not to Be Wrong.
    “One of the great joys of mathematics is the incontrovertible feeling that you’ve understood something the right way, all the way down to the bottom; it’s a feeling I haven’t experienced in any other sphere of mental life”

    OMG doesn’t that sound amazing? I would love to understand language that way! Doesn’t it make you jealous of mathematicians?

    I think a lot of linguists are after that. I don’t think we can get there, but I also don’t think we should stop trying
    -----

    Which brings us to abstraction, as Jerry Seinfeld would say, “not that there’s anything wrong with that.” Abstraction itself is not a problem for science. It is a problem for the popularizer, but not an insurmountable one. You can give an audience-friendly account of many fields and deal with some of the more intense abstractions with a simple “the math checks out,” and people will trust and believe you (or, more controversially, “the statistics check out”). Oh how I wish I could say “the feature checking operations check out” and move on to the point, but we are nowhere near that.

    I mean, the level of popularization I work at? I can’t even say “phoneme.” (It’s “speech sound.”)

    As far as popular accounts of the abstractions go, Pinker’s presentation of X-bar theory was about as good as it can get. Can I now push off the details to “the X-bar trees check out”? Ha. And that’s the kind of thing that contributes to the “shifting sands” feeling about all this.

    ReplyDelete
  3. A more personal angle, from the popularizer’s perspective. A few years ago I went to David Pesetsky’s talk at the LSA about this issue, just brimming with excitement and confidence that I could find a way to crack this “explain to the public” thing. He gave a very clear, well-laid-out presentation on a couple of important findings from CGG. This, for me, was one of those “mystical” moments I refer to, when I caught that shadowy glimpse. But I just could not figure out a way. Part of it was knowing how many assumptions would need to be torn down just as a preliminary matter (part of why the Higgs boson or genetics is so much easier to popularize, I believe, than theoretical linguistics). Part of it was my inner editor’s eye seeing the most exciting headline I could imagine (“You Won’t Believe That These Very Different Agreement Facts About Two Very Different Languages Can Be Explained By a Single Syntactic Principle!”) and finding it not exciting enough for enough people. Do the same thing over 10 different languages? Maybe. Still gonna be a tough one though.

    ReplyDelete
    Replies
    1. My uninformed $0.02 from the sidelines:

      I agree that linguistics has an uphill battle to fight against preconceived notions. But at least we have very little overt backlash compared to other fields (creationism, climate science denial, anti-vaxxers, and so on). And why should we expect linguistics to make many inroads into public consciousness to begin with? Much larger fields with much better infrastructure haven't had much success in that area, at least not at the level of depth that generative grammarians seem to be hoping for.

      Chemistry, for example, is not something you'll regularly see featured on pop-science blogs or TV shows. Despite tons of applications and a huge net benefit for society, the layperson knows little about chemistry except that there are different elements that somehow combine into molecules. I'd wager that the distinction between organic and inorganic compounds, redox reactions, and Avogadro's law are virtually unknown even among the highly educated parts of the population. You also won't find much chemistry in sci-fi novels, a genre that loves physics and biology.

      Or take computer science. It's a really hot area right now, yet there is no CS equivalent of Cosmos that discusses the Turing machine, P ?= NP, or regular expressions (which are useful for everybody who works with text). Even niche formats like Vsauce on Youtube rarely touch on those things. There are some nice books like CS Detective, but for the most part coverage is very shallow.

      Considering its size, linguistics is doing fairly well in comparison --- Pinker and LanguageLog come to mind. It is generative grammar that has a problem getting its message out, but that's not surprising: it does not satisfy the human thirst for the metaphysical (in contrast to evolution and quantum physics), it can't be co-opted for self-help books or parenting guides (in contrast to psychology and L2 research), it does not make you live longer or healthier, and it does not satisfy human curiosity because most of the things that the layperson finds interesting about language (basically butterfly collecting) are completely orthogonal to the program.

      Quite generally, the analytical parts of linguistics will never be sexy, just like everybody likes to hear stories about Grigori Perelman but doesn't really care about the Poincaré conjecture or how he proved it. If a highly respected field like math can't even get the most foundational concepts like sets and groups into the mainstream, there's little hope for binding theory or island effects.

      Imho popularization efforts for linguistics should have very modest goals: focus on debunking myths (the Eskimo vocabulary hoax, Sapir-Whorf, split infinitives), the big-picture stuff (descriptivism vs. prescriptivism, universal grammar, comparison to animal communication), and the quirky (regularities of Pokemon names, how Middle-earth's Quenya might have evolved into GoT's Dothraki, etc.). Applications would be nice, too, but since we've left NLP completely to the computer scientists, not a lot of sexy stuff will be coming from that corner any time soon.

      Delete
    2. I suspect that part of the issue is that people tend to get into generative linguistics when they’re relatively young. At this stage, the niche aspect can function as part of the attraction. (Of course Mom and Dad don’t understand what I’m working on — they’re idiots!) Before you know it, you’re 55 and trying to be respectable, and suddenly it’s not so great that no-one cares about your work. So, time to popularize. But I share Arika’s and Thomas’s skepticism about how easy that is.

      There may be advantages to flying under the general public’s radar in any case. People most likely would not vote for spending public money on research in generative linguistics.

      Delete
    3. Well, there is some of that kind of backlash. Ask the linguists who tried to step forward to defend the Oakland School Board during the Ebonics thing (Jerry Sadock, who was president of the LSA at the time, even got death threats!). I actually think a lot of progress has been made on getting the idea that grammar does not mean "school grammar rules" into the public consciousness. 20 years ago you would find in all the major media outlets huffy complaints about degenerate descriptivists and their harmful acceptance of bad grammar. Now that stuff is relegated to the small-town local papers (and, of course, the comments sections). I think this is a major victory of public outreach!

      But it's an issue that's pretty easy to click into people's personal experiences. And you're right, most aspects of most fields aren't, so they just aren't going to get out there in the same way. I think when people read about a field they're not familiar with, they think they want to know "what does this say about the nature of the universe" but what they really want to know is "what does this say about me (and how can it help me lose weight/stop procrastinating)." No, sorry, that's more cynical than I really am, but that extreme framing does help nudge you out of the "everyone should find this interesting because I do!" mindset.

      Delete
    4. I think it is worth distinguishing different kinds of popular audiences. There are three to distinguish: (i) the general public, (ii) the public interested in general scientific and intellectual issues, and (iii) the wider public of intellectuals (university, NIH, NSF, Wellcome foundation etc. types). It seems to me that much popular writing aims less at (i) than at (ii) and (iii). The latter groups tend to read things like Scientific American or Quanta. The former group gets whatever info it has about science from the papers (and maybe Science Friday). I don't disparage any group or think that one level of interest is better than another. It's just worth keeping them apart if one's aims are not merely educational but also strategic. Linguistics would gain a lot by being well understood by all three groups. But as a practical matter, I suspect that (ii) and (iii) are the most important. These are people that find science interesting and want to know what linguistics brings to the table. Pinker's book was well read by this group and it did us a lot of good. Moreover, for this group, though individual gossip is useful (everyone likes some titillating stories), the ideas matter. Moreover, this group is more than equipped to understand a reasonable ling argument. I have played host to these people and they have no problem following the point. They may not buy it, but they understand it. Now, I think we have underserved this group to our detriment. Moreover, I think that we have shied away from just those issues that most interest them (indeed, IMO, what makes linguistics interesting period), and that is what it tells us about human minds and brains. What is really sad IMO is that the Chomsky stuff really sells, and not just because it is Chomsky. People care about what language says about us, and this is the issue that you rarely see in the popular renditions (at least of late). Remember, if Everett did not claim that his work was relevant for Universal Grammar (and the big psychological claims that this supported) nobody would have paid him any attention! It was not just going after Chomsky, but going after him ON THIS MATTER. Why? Because people care about THIS MATTER.

      Now, would this sell if done well? Yes. Why do I think so? Well, it has; vide Pinker. Also, there is no problem selling stuff like Spelke and Carey in the educated press. If baby physical knowledge gets press, why not their linguistic knowledge? So, it is precisely by shying away from the cognitive implications of linguistics that we have failed to grab one of our most interesting topics.

      My bottom line: we should be aiming first at that part of the public that is already interested in this stuff. It is pretty big and we are not serving it well. We are not really telling people what we have found and what it says about us. That is too bad.

      Delete
    5. Tiny point of correction: Jerry Sadock has never been LSA President.

      --Rob Chametzky

      Delete
    6. Ah, thanks. I guess it had something to do with him being quoted in the paper as one of the LSA conference organizers about the LSA statement in support of the school board. In any case, the kooks looked him up.

      Delete
    7. Arika wrote: Well, there is some of that kind of backlash [i.e. the kind that some sciences experience from creationism and anti-vaxxers]. Ask the linguists who tried to step forward to defend the Oakland School Board during the Ebonics thing

      True, this was backlash of a sort, but I think it's importantly different from the kind coming from creationists and anti-vaxxers. If creationism were true or vaccines really did not work, then (my understanding as an outsider is that) a lot of scientists' everyday work at the moment would be baseless and useless; that's not the case, of course, but for people out there to be trying to spread views that would have that effect on one's work must be particularly inconvenient. As far as I can tell the Ebonics thing was much less directly related to the everyday work of trying to understand the human capacity for language; questions about what's a "legitimate language" or how people should speak in schools are just irrelevant for that enterprise. (Perhaps some people working in this area were well-placed to provide some important input on the issue, but that's quite different from their work being based on a premise that the Ebonics issue should be decided one way or another.)

      I'm inclined to suspect that, quite generally, too much of the public face of the field comes from being associated with debates about the supposed "legitimacy" or "status" of various languages or dialects. To me, those issues are a long way from the field's important and interesting questions, and discussions about them can be unhelpfully bound up with too many other issues. Of course we're quick to point out that we're not in those debates to wag fingers at those who split infinitives or say "ain't", but if what we're most known for is simply wagging fingers at those who wag fingers at those who split infinitives or say "ain't", then we're still leaving out what I find to be the most interesting stuff. On this point I think I (yet again) agree with Norbert: "Moreover, I think that we have shied away from just those issues that most interest them (indeed IMO what makes linguistics interesting period), and that is what it tells us about human minds and brains. What is really sad IMO is that the Chomsky stuff really sells, and not just because it is Chomsky. People care about what language says about us, and this is the issue that you rarely see in the popular renditions (at least of late)."

  4. I'm curious: if there were a new "The Language Instinct" along the lines you envision, how would it differ? Same big general ideas but further elaboration? Different priorities with respect to the big ideas? A laying out of something specific, like merge, as the essence of the instinct? For a type iii reader who has absorbed the concepts from that book about poverty of the stimulus, hierarchical structure, etc., what's the expansion?

    1. I think that I would take a look at the classic claims, put in some cute cross-ling data to show that what is so in English is so in other langs (e.g. stuff on agreement and C to Co movement (though this might be a challenge)), then add some stuff on recent acquisition studies and maybe some parsing (how islands don't show filled-gap effects, for example, or how we get reactivation of the antecedent at the trace position), and then end with some stuff on merge, how it is the most basic feature of NLs, and then maybe segue to some recent work by Dehaene and Poeppel and others showing brain signatures of this. The stuff by Moro on Jabberwocky etc. might be fun to show too. This is easy to do and it plays well with the curious.

      So the aim would be to review what the cog neuro of lang informed by ling looks like today. That would be my pitch. Interested?

    2. Have you already done something like this for, say, a "type v" audience? Familiar with linguistics, knows how to look at structure in languages besides English (this, to me, is a major hurdle even for talking to the motivated, interested, educated person), but maybe only took the intro syntax course? That seems doable, and I think most linguists would love to have the overview whether they're friendly to it or not.

    3. Done stuff like this in a UG language and mind course. Worked ok. I think a good popular treatment is doable. But I don't think this is where the need is. We need to penetrate venues like Sci Am and Discover. Once we are there, we address the audience that is most politically useful for linguists.

    4. I just mean that if you already have something worked out at that v level, then it's easier to see what would need to be tweaked to get it to a broader iii level (beyond figuring out a compact way to explain what is even meant by "agreement" and "movement" in this context, and I think that's a big one).

    5. hmm. I have 7 chapters of a book written which is basically what Norbert just said. The last 2 chapters should be done soon. If anyone wants to have a look at what I've written so far, and wants to give me feedback, I'd be very grateful.

  5. I wonder if the problem with the more general intellectual audiences is that the presentations are too long, too abstract, and too immersed in technicalities. I suggest as a model for quick persuasive force and intelligibility this argument for evolution from Darwin: the animals found on islands show close resemblances to those on the nearby mainland, with a strong bias towards types that can survive the trip, such as birds that can fly over the water, and reptiles that can survive a long rafting trip on driftwood. One longish sentence, using only commonsense concepts. I suspect that GG/UG will never be able to do that well, but we should be able to do better.

  6. This comment has been removed by the author.

  7. I think that sentence used to be something like "All human children acquire much more systematic knowledge of their languages than they have ever been exposed to." The Darwin sentence could then be linked to observable Mendelian stuff and fruit-fly experiments; I'm not sure what the equivalent is for UG, maybe Williams Syndrome or Nicaraguan Sign Language? But still, the proposed underlying mechanisms, the part that requires all those technicalities, simply have no good metaphors to hang onto or clarify with, while the statistical approaches have great ones.

  8. The best recent one was this embedding idea of merge: "an operation that can put two things together to create something that can then enter back into that same operation. And here's a clear example (some 'house that Jack built' sentence)." That's a good concrete model and a simple example to illustrate it. And yes, I understand that recursion is different from embedding, but what is the metaphor/model/example that can capture that?
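The embedding idea is easy to render concretely. Here is a minimal sketch in Python; the tuple representation is purely illustrative, not anyone's actual formalism:

```python
# A minimal sketch of Merge: combine two objects into a new object
# that can itself feed back into the same operation.
def merge(a, b):
    return (a, b)

# Building (part of) "the house that Jack built": each output of
# merge re-enters merge, which is the self-feeding property at issue.
np = merge("the", "house")
rc = merge("that", merge("Jack", "built"))
phrase = merge(np, rc)
print(phrase)  # (('the', 'house'), ('that', ('Jack', 'built')))
```

The point of the toy is that nothing extra is needed for deeper structures: the same two-place operation, applied to its own outputs, yields embedding of any depth.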

    1. I think we need to begin closer to the beginning and rebut people like Morten Christiansen, who observe that finite state pattern learners can acquire the grammatical patterns of apparently center embedded structures, and therefore conclude that phrase structure has not been demonstrated to exist.

      So one of my thoughts for dealing with this kind of thing is a diachronic argument from the history of Germanic languages. Once upon a time these were SOV, like Latin, but a syntactic change happened, putting the V in second position. In Scandinavian, this happened to all clauses, but in West Germanic (German and Dutch), only to main (technically, 'root') clauses, including those coordinated by the 'coordinating conjunctions'.

      This shows (a) that the distinction between subordinate and non-subordinate clauses is real, not a kind of grammar addict's delusion; (b) that clauses are nevertheless a kind of thing (with subkinds); and, most importantly, (c) that the change did not care whether a subordinate clause was final in its containing clause (e.g. an RC modifying a clause-final object) or fully inside it (an RC modifying the subject). Finite state grammars can't use the same information to describe sentence-final putative subordinate clauses and sentence-internal ones, so they can't explain how the change went in Scandinavian; therefore the flat-structure idea is wrong.
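The contrast with flat, finite-state pattern matching can also be sketched concretely. Below, a^n b^n stands in schematically for center-embedding (n subjects, then n matching verbs); the regex and the depth bound are illustrative assumptions, not anyone's actual model:

```python
import re

# A finite-state pattern (a regex without recursion) can only
# hard-code each depth it covers; here the bound is depth 2.
FSA_DEPTH_2 = re.compile(r"^(ab|aabb)$")

def phrase_structure(s):
    """Recognize a^n b^n (n >= 1) with one recursive rule: S -> a S? b."""
    if len(s) < 2 or s[0] != "a" or s[-1] != "b":
        return False
    inner = s[1:-1]
    return inner == "" or phrase_structure(inner)

print(bool(FSA_DEPTH_2.match("aaabbb")))  # False: depth 3 exceeds the bound
print(phrase_structure("aaabbb"))         # True: one rule covers every depth
```

The recursive rule states the generalization once; the finite-state pattern must enumerate the depths one by one, which is the sense in which it cannot "use the same information" across positions and depths.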

      Definitely not in the same class as Darwin's argument, but, still, not relying on recent technicalities, but on terminology and ideas you can find on grammar web pages (specifically, discussions of the subordinating and coordinating conjunctions in German).

      When enough of the basics are tamped down well enough, then maybe we can move on to the bias for binarism, to the extent that it has really clear supporting evidence and arguments, which it does in some cases, such as the ones discussed by Pesetsky in his LSA talk.

    2. "Finite state grammars can't use the same information to describe sentence-final putative subordinate clauses and sentence-internal ones"
      That's a dangerous argument to make because it conflates the device with a description of the device. If by finite-state device you mean finite state (string) automata, then you are right that they do not provide that level of generalization. But things are already very different if you look at monadic second-order logic over strings, which has the same power as FSAs. In MSO, embedded clauses do have a unified description.

      A less abstract format is the following: every FSA can be represented as an augmented transition network, i.e. a system of small FSAs, e.g. for NPs, clauses, etc., which may call each other at various points. In such a system, embedded clauses can be given a unified description. And as long as the depth of calls in the ATN is finitely bounded, the ATN simply describes a very complex FSA (although ATNs can be much more succinct than the equivalent FSA).
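A toy illustration of the bounded-ATN point (the grammar fragment and the MAX_DEPTH bound are hypothetical): a small "clause" network calls itself, and because the call depth is capped, the whole system has only finitely many configurations, i.e. it is just compact notation for a large FSA:

```python
# Each configuration is a (state, call-stack) pair; with the stack
# bounded, there are finitely many configurations, hence an FSA.
MAX_DEPTH = 2  # hypothetical bound on self-calls

def clause(tokens, depth=0):
    """Recognize 'n (that CLAUSE)? v'; return the unconsumed remainder
    on success, None on failure."""
    if depth > MAX_DEPTH or not tokens or tokens[0] != "n":
        return None
    rest = tokens[1:]
    if rest and rest[0] == "that":       # call the clause subnet
        rest = clause(rest[1:], depth + 1)
        if rest is None:
            return None
    if rest and rest[0] == "v":
        return rest[1:]
    return None

print(clause("n that n v v".split()) == [])  # True: accepted at depth 1
```

Note that the unified description of embedded clauses lives in the single `clause` subnet, even though the expanded device is finite-state; that is exactly the subtlety about where generalizations get captured.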

      Bottom-line: this is a very subtle issue that greatly depends on one's stances regarding cognitive reality, the level at which generalizations must be captured, and so on. Hardly a topic for a simplified discussion.

    3. Hmm, so perhaps some different terminology would be better. Non-phrasal? Flat? The issue of what generalizations are to be captured and why is exactly what the entire subject turns on, so it can't be ducked. And I don't even think it's really that subtle: basically like the difference between a car that can ideally go on forever except that stuff in it will wear out, or perhaps even more stringently limited because somebody has welded the gas cap on and it can't be refueled. In a sense it is subtle (as is the Darwin argument's use of the notion 'similar' applied to animals), but it is also reasonably commonsensical.

    4. Another point here is that your concrete finite state description is in two components, the FSAs and the call depth limit(s), but we also know that the latter obeys universal tendencies and historical laws, as discussed by Fred Karlsson and others, previously, in less detail. For the first, maximal depth is greatest for end-recursion and least for central recursion (proper embedding), with front-recursion in between; for the second, when a language starts getting written down, these limits start to increase, in the written language at least, for a period of about 400 years until they max out. This is a rather special kind of behavior that indicates that a different component is at work, with its own describable tendencies, whether or not you want to put it in the 'grammar' (the slowness of the change indicates that it involves a kind of learning, so it would not be insane to put it in the grammar, in a 'stylistic component' perhaps). Take it away, and the infinite idealization appears.

      And regardless of whether we put the restrictions in the grammar or not, we might want to keep the infinite idealization, because the apparent acceptability frequency of the complex constructions falls off with complexity, and if we tried to describe it carefully, we might not want to say that there was an abrupt transition to zero frequency or acceptability. Do we even know that complex center-embedding constructions are rarer than they ought to be on the basis of some vanilla assumptions, such as that the statistics of overt constituent structure is context-free (if the probability of an NP being center-embedded in Modern Greek newspaper style is 1/1250, then the probability of one being doubly center-embedded is 1/1250^2)?
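For what it's worth, the back-of-envelope arithmetic in that parenthetical works out as follows (the 1/1250 figure is the comment's own assumed rate, not a measured one):

```python
# Under a vanilla context-free assumption, embedding probabilities
# simply multiply, so double center-embedding is the single rate squared.
p_single = 1 / 1250        # assumed rate of a center-embedded NP
p_double = p_single ** 2   # expected rate of double center-embedding
print(p_double)            # ~6.4e-07, i.e. roughly 1 in 1.56 million NPs
```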

      Traditional generative arguments involving succinctness ('capturing generalizations') are often not fully convincing, because nothing further is shown to follow from the fact that some grammar, under some theory, is the most succinct; but (as has been known for a long time by us, but probably not by many of the disciplinary neighbors) change can sometimes provide this something further.

    5. Oops, "some grammar, under some theory, is the most succinct"

  9. Several points:

    1. Okrent gets the point that linguistics is not at all homogeneous very well, especially in the part containing
    "Many linguists use the same kind of evidence (native speakers’ intuitive acceptability judgments) and the same methods (hypothesising structures and constraints that account for them), but simply want to discover the rules of particular languages, or to examine how different languages handle comparable phenomena."

    For this reason, comparisons to hard sciences are pretty useless at this point, because linguists lack the certainty about the phenomena being studied that hard sciences have. Systems biologists and molecular biologists at least can agree that they are working at different scales and be happy. Only within generative linguistics can that happen for ling (say, syntax vs phonology). "Linguists" often have wildly diverging goals, not to mention the differences among those who take linguistics as a cognitive problem, or even a formal problem (see Thomas Graf's excellent FOL post about this problem in computational linguistics). Certain "generative theories" might be falsifiable, but even then under a whole range of assumptions which other linguists can (and do) just say are useless.

    2. On abstraction:
    There seems to be a conflation of "abstract" linguistic objects with mathematical objects. Linguists love to throw around the recursion and infinitude claims as being explanatory, despite there being very little evidence for either of them beyond theoretical convenience (see Pullum/Scholz, Kornai). On the other hand, relevant linguistic abstractions (like features, feature geometries, or local/non-local dependency restrictions), which are very empirically testable, never tend to enter any popular debate.
    Abstraction does not need to be mathematical, and indeed in many sciences it isn't. When it is, there is very much a distinction between what is guaranteed by the math and its mapping to whatever domain is using the math (Feynman in particular was always careful about stating this). Even Popper made a distinction between "logical truth", which doesn't need falsifiability, and scientific statements, which do. Linguistic abstractions may gain insight from mathematical considerations, but are not inextricably linked to them. And often the reverse happens, where new mathematics springs from non-mathematical abstractions.
