
Wednesday, May 7, 2014

Linguistic just-so stories, plus ça change...

A fearsome grouping of eight authors has recently combined (here) to try to drive a stake into the evolution-of-language industry. The octet are Marc D. Hauser, Charles Yang, Robert C. Berwick, Ian Tattersall, Michael Ryan, Jeffrey Watumull, Noam Chomsky and Richard Lewontin. If anyone can cut off the endless, rather fruitless discussion, these eight should be able to do it. The paper makes all the right points. It argues that making evo arguments pertaining to the emergence of cognitive novelties will require evidence that one should expect to be unavailable. They make the point that we know little about how minds embody the cognition that undergirds behavior, less about how brains support the relevant mental modules, and even less about how genes grow brains that support minds of the right type.  Consequently, any argument aimed at explaining how minds like ours, ones with a faculty of language with its distinctive combinatoric characteristics, arose (i.e. how genes combined to give rise to brains with minds that contain an FL) will confront very substantial obstacles. They further make the point that to date, not surprisingly, little progress has been made. They yet further observe that many papers purporting to address the evolutionary question presuppose that our relevant ancestor already had the combinatoric capacity whose emergence we are interested in explaining.  So, all in all, it seems that questions concerning the evolution of language remain pretty much where the Paris Academy left them 150 years ago when it banned further discussion of the topic. Just as they were then, the questions are either begged or shrouded in a very thick mist.

Many (most?) of these points have been made before. I still consider Lewontin's paper in the Invitation to Cognitive Science volumes the best single thing one can read on the topic. There he showed just how hard it would be to even begin to bring an argument of the relevant form to bear on the evolutionary issues. Our octet repeats his points and brings them further up to date. 

There is one nice addition: a proposed proof of concept for the whole enterprise, mentioned quickly at the end of the paper, that I would like to flag here. The evo of language will require a story that goes from behavior to mental structure to genes. Hauser et al. suggest that we demonstrate how this can be done for a simpler system before we tackle human language. Their suggestion is bee communication. We have a pretty good idea of what the bee communication system does and how it is structured (everyone watches "Dancing with the Bees," I assume). Unlike human grey matter, one can go after bee brains and cut them up and grind them down and do what neuro types love to do without going to jail. In contrast to the human case, there are various close relatives to bees still flying around, and the bee genome and brain wiring diagram are probably (I'm guessing here: from my armchair I would guess that bee brains are smaller and genomes less involved, but who knows) easier to map than ours are. So, here's their proposal: show us how this bee communicative capacity evolved so that we can use it as a model of how to tackle the human language case. Not that human language and bee dancing are that close (they are not), but at least we will have a model taking us from behavior to genes. I LOVE this idea. I hope it catches fire.

Some questions are irresistible. How language emerged in the human species is one of these. Sadly, being intriguing is not the same as being tractable.  There is a tide in the affairs of science, and one has to know when it's time to address a question and when it's time to wait. The French 150 years ago urged a moratorium. I suggest, given what Hauser et al. say (and what Lewontin said so well before), that we put this question on ice, for, say, another 150 years.  



67 comments:

  1. I admit that I would have expected more from a paper that has 8 authors. Alas, not a single genuinely new insight is provided. Just more of what we have heard from Chomsky [and those few he cites in support of his view] for years now: everyone else is wrong but I [we] don't have a solution to the problem either. Disappointing.

    Having just come back from Evolang X in Vienna, I think the worst thing to do would be to follow Norbert's advice and "put this question on ice, for, say, another 150 years". Too many people work too hard on some of the issues the octuplet deems unsolvable [certainly Tecumseh Fitch, former co-author of Hauser and Chomsky, deserves much credit for his innovative, albeit not entirely uncontroversial, work that provides insights that go well beyond bee communication]. Alas, not all is well in language evolution research. Most notably, many language evolution researchers seem to have given up on waiting to get constructive input from linguists. This is of course an understandable reaction to Chomsky's ongoing denigration of language evolution research, discussed in detail here: http://ling.auf.net/lingbuzz/001592 and in my review article in JL [A 'Galilean' science of language. Journal of Linguistics, FirstView, May 2014, pp. 1-34, DOI: 10.1017/S0022226714000061, published online 07 May 2014]. (I do not know how to link papers here but trust Norbert will offer a thorough evisceration shortly.)
    One can only hope that linguists will resume a crucial role in language evolution research soon - what needs to be overcome is not scientific curiosity but the attitude that everyone who disagrees with me is an irrational dogmatist.

  2. @Christina: (first, props for being from Dalhousie U. Some of my favourite folks are from Nova Scotia, and King's College produces some outstanding scholars).

    So I'm curious: Norbert has just summarized the main arguments against even a generous interpretation of the results from research into language evolution. We can all see the summary paper from which the contents of this blog post are derived. Can you summarize for us the main arguments for taking the current results from this kind of inquiry seriously, as well as the evidence that suggests that this (your?) approach is likely on the right track? Even a brief sketch like the one above will do.


    I read your review of the McGilvray book and I think that it's unconvincing because what you've done is basically review a compilation of disjointed, informal, quasi-spontaneous discussions. For the record, I find McGilvray's way of interviewing and his "additions" to be awful. Nonetheless, the fact that it's a collection of chats makes your arguments about lack of evidence, lack of citation, and distortion sort of unimpressive. And I predict that it's going to leave people with a pretty bad flavour in their mouths after reading, because as soon as one shifts to the formal work (of, let's say, the authors of the above paper; or Piattelli-Palmarini, Lenneberg, Fodor, Gallistel; even things like the transcript of the Piaget/Chomsky debate) what one finds is that there's quite a bit to the criticism you're opposed to.

    Replies
    1. Thank you for your comments. Before I venture to give the summary you ask for, can you please let me know what your definition of 'language' is. Norbert, for example, takes the Chomsky-style language faculty as a given. I don't. So you and I could talk about quite different things when we talk about language evolution.

      I accept that you find my review of 'the McGilvray book' unconvincing. But let me ask you why you are seemingly untroubled by the fact that someone who has been repeatedly described as the greatest intellectual alive would participate, for decades now, in the kind of "disjointed, informal, quasi-spontaneous discussions" of which SoL is a paradigm example. One can accuse McGilvray of many things if one wishes. But it is hard to find someone who is more enthusiastic about Chomsky's work. Whatever Chomsky says in an interview, McGilvray will write down [including all the laughter, false starts and irrelevancies]. One can debate whether some of the questions asked were well chosen, but this one strikes me as entirely appropriate:

      "Noam, let me ask about what you take to be your most important contributions. Do you want to say anything about that?"

      Please tell me, why is there in Chomsky's reply not even a hint of reference to the technical innovations that have set and re-set theoretical agendas for those who share his general perspective? Why, instead of speaking about the linguistic work that set the field on fire with excitement, does he make vague and confused comments about evolution and biology? Maybe McGilvray can be faulted for not insisting that Chomsky take his question more seriously and for just jotting down everything that was said. But does the nearly incomprehensible paragraph on pp. 76-8 strike you as the kind of answer someone would give who is currently engaged in all the formal work you want me to shift to? Would you give such an answer when asked to summarize your current work? Don't you find it just a tad odd that you fault me for not referring to the Chomsky/Piaget debate of the 1970s when I review a book written in 2012 which, allegedly, is "truly exceptional in affording an accessible and readable introduction to Chomsky's broad based and cutting edge theorizing"?

      Finally, I do not think the kind of distortion [like that of Elman's work] I describe should have any place in a formal or informal publication by a leading academic. I am not at all opposed to criticizing the work of Elman [or Deacon or Boden or ...] - and if you have read my review carefully you know that. I am opposed to distortion, denigration, and dishonesty.

    2. @Christina: thanks for getting back to me. I think that an appropriate definition of what language is depends on what kinds of questions you're asking: sociological, political, literary, physiological. They're all valid with respect to their domains. For the purposes of studying language evolution we could study it with respect to all of these things. But for the sake of this argument, I take it that what we're talking about is how we got the biological capacity for this topic-specific kind of thinking we call language, something of the sort that Fodor, Chomsky, Norbert, etc. have in mind. And I take it that evolution is an unintentional, deterministic, mechanistic process. (Of course there isn't even consensus about what we should take natural selection to be, and I think Piattelli-Palmarini and Fodor make one of many excellent cases about this.)

      I'm not sure what your objection to a faculty of language is. Without going into detail, I find the evidence convincing that organisms are made up of subsystems which are specified with respect to structure and function in every physiological dimension we understand, and I see no reason thus far to think that the mind must be an exception to this trend.
      ...

      Why does he participate in so many informal interviews?
      Who knows. I don't really care and I'm not in the business of defending other people's choices in these sorts of things. But my reservation isn't dependent on the answer to this question. It only suffices to note that if one objects to his lack of citations (for example) when he visits the Charlie Rose show, it comes off as weak, because as soon as the reader checks out Manufacturing Consent (the text) one finds copious citations.

      Why does McGilvray ask such softball questions and do such poor follow-up? Who knows. Again, I don't care to defend his interview style. Maybe he saw the book as being for basically internal circulation (internal to the generative grammar sect), and everything else is kind of a publisher's trick to make money. And if we were all judged for how we need to spin our projects to get them published and all that, I'm quite certain most of us would be pretty embarrassed. But again, I'm not excusing or defending it. I'm just saying: ultimately, who cares? If you find the content not useful, or confusing, ask someone from that camp to explain a particular part to you. I'd be happy to chat about the book in more detail with you. I'm a fellow Canadian philosopher / linguist living in Toronto.

      Why doesn't Noam play up his contributions when asked about them? Not to sound like a broken record, but who cares. Who the hell is he anyway to evaluate his own contributions for us? We're all going to make up our own minds regardless of what he says. In fact, I'm quite tired of scholars telling me they're the best thing since sliced bread, and I find scientific modesty wherever I see it quite refreshing.

      I don't know anything about the distortion of anyone's work in detail. I read the sections about it in your review, but I'm reserving judgement until I find some more evidence of it. The whole Postal thing I find sort of hard to make heads or tails of, since Postal is also a right-winger who writes articles for blogs dedicated to denouncing Chomsky's political work, which are typically full of distortion themselves. So it's hard to know what the truth is.

      Lastly, I'm not faulting you. Although I disagree with your take on the text and the work, I'm just trying to say that you could make a stronger argument if you engaged with the strongest arguments from the generative grammar body of work. As long as you attack the weakest links and don't challenge the strongest, or fail to provide an argument against the strongest link, your critique will flounder, I think. Or it will be convincing for the wrong reasons. Winning an argument via mud-slinging is a fleeting victory.

    3. Just a few comments:

      First, as a biologist I find the arguments by Fodor and others for modularity [of mind] very unconvincing. There is no point in me trying to convince you on a blog. If you're interested in what people who actually work on the brain have to say, Terrence Deacon's "The Symbolic Species" is a good place to start.

      Second, I wrote a review article for a linguistics journal. That pretty much constrains what I can and cannot include. If you think there are strong arguments in SoL please list them.

      Third, unless you have excellent evidence to support the claim that McGilvray "saw the book as being for basically internal circulation [and] everything else is kind of a publisher's trick to make money" I recommend you refrain from such speculation. It is insulting to Chomsky because it implies he would be some incompetent who is unable to prevent a greedy publisher from taking advantage of him. And it implies CUP is soliciting manuscripts under false pretences - do you have ANY evidence for that?

      Fourth, I have found no evidence for modesty in Chomsky's answers. He dismisses the work of people who disagree with him [often accusing them of irrational dogmatism] and elevates himself to 'expert on all fields'. If he is as modest as you imagine an excellent reply to McGilvray's question would have been "Really now Jim, I should not answer this, let people like Max Baru make up their own minds".

      Fifth, I find your comment about Postal startling [especially coming from a philosopher]. He provides ample textual evidence for every claim he makes. If you doubt his claims you go to the texts and find out if they are represented correctly. Next, evaluate whether Postal commits any errors of reasoning. If he does not, I recommend you take him seriously [even Norbert will confirm that Postal is and always has been an excellent linguist]. As for him being a 'right winger', you may want to apply the advice you give so freely to others: WHO CARES. It is utterly irrelevant to the linguistic arguments I have cited.

      Sixth, you accuse me of mudslinging. The dictionary tells me mudslinging is "the use of insults and accusations, especially unjust ones, with the aim of damaging the reputation of an opponent". Could you be so kind as to provide examples of where I am doing this? Cases of unjust accusations would be especially appreciated.

    4. @Christina:
      responses (to your numbered points)

      First: that's a spurious dismissal if I ever heard one. Plenty of neuroscientists take what you would call a generativist view of the mind seriously. Take, say, the folks Andrea Moro has been working with, or Gallistel, or David Poeppel, or books about neuroscience like 'Principles of Neural Science', 4th ed., by Eric Kandel, James Schwartz, and Thomas Jessell. Also, if you'd like to take a crack at refuting Fodor's arguments, I'd be very amused to see you have a go. Please, by all means.

      Third: there's no argument here, only an assertion of opinion. You asked questions about people's motivations, and I responded; that's all. Seeing as your work was all speculation about motivation, as opposed to published correspondence with McGilvray & Chomsky, I'm not sure why speculation is out of bounds.

      Fourth: seems to me that's an evasive response. We aren't arguing about whether Chomsky is modest in general; we were talking about one q/a set, which you are so fond of dissecting. Modesty is pretty subjective anyhow, I suppose. Being dismissive and being modest aren't mutually exclusive by any definition I know of. One can be perfectly modest but dismiss geocentrism, for example.

      Fifth: ideology & science have always been snug bedfellows. Speculation about a person's preferred scientific world-view and their politics has been fair game since about the seventeenth century. Not that it ought to convince anyone in and of itself, but it certainly raises eyebrows.

      Sixth: ah... how about down below where you compare Norbert to the East German authorities during the communist era, for example?

      Your entire argument is based on discussing personalities. Any reader can review your comments on this thread and notice that. And you make no attempt to separate discussion of matters of fact from personalities. I'm sure Robert Boyle must be spinning in his grave...

    5. @Max: my apologies; I was under the impression you were interested in a serious discussion of my review - my mistake.

      Apparently, you just search for reasons to generate conflict. How else could one explain your reply to my sixth point [where IN THE REVIEW of SoL did I sling any mud at Chomsky?] with reference to a comment I made on this blog about something Norbert had said some 2 years after SoL was published - truly astonishing! I suggest you find yourself another victim; I have no interest in continuing this conversation.

  3. In 2002, Chomsky wrote about some of Deacon's proposals concerning language evolution:

    "they seem to reshape standard problems of science as utter mysteries, placing them beyond any hope of understanding, while barring the procedures of rational inquiry that have been taken for granted for hundreds of years"

    That seems to me a quite accurate description of what Hauser et al. are doing in this review.

    If you want to read some of the responses from the community of researchers working on language evolution, check out Replicated Typo.

    Chomsky, 2002. On nature and language. Cambridge: CUP.

    Replies
    1. Chomsky believes (and so do I) that at any given time some problems are ripe for analysis. Deacon takes problems for which we have pretty anodyne kinds of solutions and substitutes for them mechanisms that are very hard to fathom: he treats languages as objects that exist independently of the speakers that speak them, for example. Here, I think that Chomsky has a point.

      There are other cases however, where it is less clear that we have anything interesting or non-trivial to say. I have always thought of the consciousness literature as like this. I also think that Chomsky et al have a point when they tell us that we have learned very little about evolution of language despite all the ink that has been spilled.

      Science is opportunistic. It is always worth asking whether a question we would like answered is ripe for investigation given our current tools and understanding. His view (and I agree) is that there is no reason to think that speculation about the evo of language is likely to get anywhere deep, given what we would need to fashion a standard rigorous evo explanation. That's his claim. The way to refute it is not to say that it is unscientific but to produce non-trivial accounts that meet the standard criteria of acceptability. If Lewontin is right (and he IS in a position to know about these matters), then for most cognitive capacities these standards cannot be met and so we will get a bunch of degenerate accounts, viz. just-so stories. This just means that these questions, no matter how interesting you find them, are not ripe for investigation. This is a scientific judgment. It may be wrong, but it is an important judgment to make. Make the wrong one and you waste lots of time, both your own and that of other people.

      So, show us something non-trivial about the evolution of the discrete infinite property of the kind we find in human language. Here the Lewontin problem is even harder for we know of NO other beings that have it or anything like it. Not a good place to be for comparative biology. Moreover, it is not a trait that seems to vary much in humans; we all have it modulo pathologies. So, as Lewontin noted decades ago, it doesn't look like a promising area to look for an evo account. Is he right? I think so. Is it unscientific? Nope.

    2. If I were Planetary Dictator, I'd impose a moratorium of 20-30 years, during which people would continue to work on learning more about the cognitive abilities of humans and other animals, and the differences between them. Maybe something could be done with the results. Crows, for example, do seem to have at least some ability to do something sort of like recursive planning (use a pebble to get a short stick to get a long stick to get the food), but in the video (http://www.youtube.com/watch?v=AVaITA7eBZE&feature=player_embedded), all the props they need are sitting around visible in their environment, while human recursion can proceed in the absence of environmental support. How would crows do if the resources were scattered around a large landscape, without mutual visibility?

    3. This reply is to Norbert:

      You write: "[Deacon] treats languages as objects that exist independently of the speakers that speak them, for example. Here, I think that Chomsky has a point."

      Let me ask a very blonde question: why is it entirely legitimate for you to reduce Deacon's 1997 book [in which he provided the ANALOGY you refer to] and all his work done since to this one statement, on which you base your dismissal? Why is it not legitimate for me to criticize Chomsky's work based on countless statements by him because, allegedly, I fail to refer to his formal work [of which no example is ever cited]? It would seem that if you expect people to take into account everything Chomsky has ever said before criticizing him, you ought to give the same treatment to others. So let me ask you: How much of "The Symbolic Species" have you read? How many of the articles and books Deacon has written since are you familiar with? And the same questions apply to the work of all the other folks who publish on language evolution - unless you have read all of it, how can you be so confident that "we will get a bunch of degenerate accounts"?

      You write "show us something non-trivial about the evolution of the discrete infinite property of the kind we find in human language"

      Just why do you think this is such an interesting property of human language? We find discrete infinity in sets like:
      [1] My father is dead
      [2] My father's father is dead
      [3] My father's father's ... [n] ... father is dead
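
      To be concrete about the rule at issue, here is a minimal sketch [Python, purely illustrative, mine and not anything from the paper]: a single finite rule whose output can serve as its own input yields a distinct sentence for every n.

        def father_np(n):
            # base case: the bare noun phrase
            if n == 0:
                return "my father"
            # recursive case: the rule applies to its own output
            return father_np(n - 1) + "'s father"

        def father_sentence(n):
            return father_np(n).capitalize() + " is dead"

        print(father_sentence(0))   # My father is dead
        print(father_sentence(1))   # My father's father is dead
        print(father_sentence(7))   # a sentence no one would ever actually utter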

      Exactly when was the last time you used a sentence like [3] for some very large [albeit not infinite] n in conversation [or in internal thought]? The [alleged] ability to generate endless sentences plays virtually no role in human language use - so why is it the most important task to figure out how it could have evolved [if in fact it did evolve]?

      Maybe you want to focus on the evolution of the [alleged] ability to understand a wide variety of previously not encountered sentences? Why would you want to study it in isolation from other cognitive domains? Even if one were to assume syntax is the essence of language, it does not occur in a vacuum but in embodied agents who use language [to communicate, to think, to problem solve, to...]. So all this work you propose to put on ice for 150 years actually is quite relevant to the evolution of the system you claim to be interested in...

    4. Norbert, I might agree that it may be hard to show something non-trivial about the evolution of the discrete infinite property of the kind we find in human language. But then what we disagree on is the importance of that property, and particularly the questionable move of equating that property with all that might be interesting or non-trivial about human language, which is a complex adaptive system that opportunistically builds on a mix of biological, cognitive and cultural foundations.

      As Séan Roberts notes in his Replicated Typo post, "the dogmatic emphasis on discrete infinity as the primary or only language phenotype worth studying is extremely limiting. The concept of discrete infinity itself has been criticized by many (e.g. Hurford), and statements like “no matter how far apart [arguments] are from each other, the [hierarchical] association remains” seem absurd when looking at language from a performance perspective. There are plenty of other factors that have been suggested as important aspects of the language phenotype, such as capacity for massive storage (Hurford, at this year’s EvoLang), joint attention, theory of mind, power and kinship, social structure, to name a few."

      So, yes, when Hauser et al. proceed to (1) isolate some abstract property as the only thing worth investigating, (2) conclude that it's well-nigh impossible to study the evolution of *that* thing, and (3) in the process equate that thing to language and (4) frame the whole business of language evolution (with not so subtle slippage from language-as-discrete-infinity to language-in-all-its-aspects) as "mysterious", what they are doing is exactly what Chomsky was describing in 2002: "they seem to reshape standard problems of science as utter mysteries, placing them beyond any hope of understanding, while barring the procedures of rational inquiry that have been taken for granted for hundreds of years".

    5. +1.

      Particularly when Norbert himself has several times said that he doesn't think that MERGE is the key property but LABEL instead. So why do you even agree with this paper?


      I admit to being mildly irritated that the paper contains at least 3 different versions of what this "discrete infinity" thing is meant to be.
      1 discrete infinity (i.e. any process that produces an infinite number of discrete objects whether it is recursive or iterative or neither)
      2 recursive merge (a combinatorial operation that can be applied to its own outputs)
      3 Turing computable (I guess, since there is a para lifted from the Watumull et al paper that we discussed here)

    6. I'm all for isolating interesting properties of FL for study and I think it's of no particular interest to follow a research program that conflates them all. It is much better to have isolated too far and discover that _more_ than Merge is needed after all than to work at the hopeless task of explaining how the kitchen sink evolved.

      However I would submit that in that vein, Norbert, you ought to write a reply to this paper working toward dissociating Merge and Label in the sphere where these debates are taking place, if you really think the first can exist without the other.

    7. @ewan, "a research program that conflates them all" is a throwaway description that does no justice to the many converging lines of research dismissed by Hauser et al. as irrelevant. Hauser et al. seem to have made an a priori decision that all that is interesting about language is one property the precise characterisation of which is contested and ill-defined. That the cumulative science of language in the light of biological and cultural evolution operates with a broader and more imaginative conception of language seems clear; but to say that this implies they conflate all of its properties is simply a strawman. They, too, isolate a range of interesting properties to study them; it's just that they don't think a single magic bullet is going to cut it.

      Most researchers in the field of language evolution believe that all aspects of language (including the phenotype of discrete infinity but not limited to it) should ultimately form part of the complex story of the evolution of this complex trait. Insofar as the ultimate goal of the language sciences is to understand language in all its aspects (one reasonable gloss of what it means to study language), ultimately we will have to strive for consilience and converging evidence. That will not be accomplished by pretending that only one property is important and that the evolution of language will forever be shrouded in mystery because this property is so abstractly defined as to be virtually impossible to study, as Hauser et al seem to suggest.

    8. @Alex: Whether it is merge or label, the issue is how the kinds of unbounded hierarchies and dependencies we find in natural language arise. The Hauser et al paper concludes, rightly in my view, that little has been discovered that sheds light on this particular issue AND that we are likely to find nothing in the foreseeable future either, given the exigencies of making a decent evo argument in this domain. As I said, I find the original Lewontin paper convincing enough. This paper reiterates its basic themes with a few additions.

      @Mark: you can study anything you want to study. And you can call 'language' anything you want to. However, linguists of my ilk are interested in how a very particular kind of system arose, the one with the very particular properties we have isolated. Now, it's pretty clear that there is general consensus, even in the Replicated Typo world, that nothing of interest has been said concerning this particular feature; otherwise we would not be upbraided for being so obsessed with it. But obsessed I am, and that's what I want an explanation for. Of course, you can work on whatever you want, but it seems we agree that IF it is the kind of structures grammars have that we want explained, NOTHING of interest has been said regarding IT!!!

      @Ewan: I guess your relentless open-mindedness has gotten the better of you. You were not always so generous and I don't think that you are better for it. Fluffy let-a-thousand-flowers-bloom research programs are not programs. To my mind, they are a waste of time. Of course, there are times when you get lucky and find something. Were that to happen I would listen. And of course I will not dictate what you study. But to be told that I MUST pay attention to stuff that is unrelated to my interests because unless I do so it hobbles THEIR work? Really? Hauser et al will stop people from considering their issues because they rightly note it says nothing about the origins of syntactic structure? The aim of scientific work is to clarify; to get a grip on what kinds of approaches will answer what kinds of questions. Science is opportunistic in that it answers the questions it can and leaves by the wayside the questions it cannot. It is very useful to know that the kind of work currently being pursued in the evo world will not answer MY question. I would be happy if it answered one of yours, so long as you don't insist that I give a damn. There are only so many hours in a day.

    9. @Norbert - I fail to see in what sense I was being open minded. Not only was I agreeing with Hauser et al's rejection of certain approaches -- I said that the "language because everything" approach doesn't seem to have much of interest on offer as a research strategy -- but I was also suggesting you go a step farther and reject THEIR claim! Not this paper, but the principal idea about Merge. This really does represent the mainstream opinion within generative grammar and it would be worth having a little more reflective debate that follows the ideas rather than the camps. I can imagine a thousand things that could be said about Label versus Merge - all speculative of course, but I see no reason to stop speculating. This is DOUBLY anti-open-minded. I have no idea how you read into this that I was somehow saying we should pay attention to pointless research programs.

    10. First let me apologize for mistakenly thinking that you were open minded. Thankfully, I was wrong. It really is a problem in this day and age and I thought, wrongly, that you had succumbed.

      Ok, for where I disagree with Chomsky. I have discussed this in various places: first in the 2009 book and then again recently in a paper with Bill that will be coming out. I think that merge is not a simple operation but built out of two others, a combine operation of some kind and another operation (label) that effectively designates the class of possible combiners (label creates equivalence classes of potential combiners from a basic inventory). This is discussed in the book and I am currently working on other versions of this idea as I write (viz: I think that the right version of combine may actually be something like the set-theoretic Union operation and that Labels are objects that can be mapped into their unit sets so as to be unionable. I know this sounds caliginous but here's not the place to elaborate).
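
      Since that was caliginous, here is a toy sketch of how I picture it [Python, with frozensets standing in for syntactic objects; the function names and the 'V'/'T' labels are mine and purely illustrative, not anything official]: Combine is plain set-theoretic Union, and a Label is made unionable by being mapped into its unit set.

        # Toy sketch only; nothing here is the official story.

        def unit(x):
            # map any object (lexical item or label) into its unit set
            return frozenset([x])

        def combine(*objs):
            # Combine as plain set-theoretic Union
            out = frozenset()
            for o in objs:
                out = out | o
            return out

        def merge(a, b, lbl):
            # Merge decomposed into Combine plus Label: the label is wrapped
            # in its unit set so that it is "unionable" with the rest
            return combine(unit(a), unit(b), unit(lbl))

        vp = merge("eat", "apples", "V")
        tp = merge("will", vp, "T")   # outputs feed back in: recursive hierarchy
        print(tp)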

      But (you knew there would be one, right?) so far as the evolution discussion goes, I don't see that these rather abstract differences have differential consequences. Either would come to the same conclusion wrt the points made in the Hauser et al paper. As such, wrt this work, it's not worthwhile dwelling on the differences in approach. 'Merge' is just shorthand for WHATEVER it is that got us the kind of recursive hierarchy and dependencies that we find in Gs.

      Is it worth discussing the differences in other venues? Sure, which is why I have presented an alternative elsewhere. I can assure you that Chomsky is not sold on it. He has reasons for his views. I have some for mine. We discuss. We disagree. We argue. That's part of the fun. However, given that we agree on far more than we don't, and given that any disagreement with Chomsky is seized upon by some as an indication that his way of framing issues has been proven false (and you've seen enough of this just on this blog) I don't like to press or highlight disagreements where they are not germane. It just muddies the intellectual waters.

      Last point: I like the idea of debating ideas rather than following camps. I've quite often disagreed with Chomsky on details. However, I believe that he has managed to frame the general questions almost perfectly. And on the big issues, like the one discussed in the Hauser et al paper, the arguments are entirely spot on. As for Merge vs Label, well maybe I'll talk more about it on the blog sometime, though I am afraid that it won't be nearly as much fun to read as this stuff is.

      To end: sorry for mistaking your position. BTW, if you can think of 1000 things to say about merge vs label, I'd love to hear them. I have been able to think of about 3.

    11. Which of the three incompatible descriptions of "discrete infinity" in the paper is "spot on"? Or are you agreeing with all of them?

    12. For my purposes, I take 'system of discrete infinity' to name that collection of facts that we put under hierarchical recursion plus the dependencies. So, big structured phrases plus dependencies. The description given in the text is fine, but I think it is a bit of overkill. We have myriad ways of describing this, which, for my money, are all pretty good. I would be happy with most any of them for these purposes. But, to make me happy, some kind of MP characterization would be perfect.

    13. I think this is an amazing admission:

      "given that any disagreement with Chomsky is seized upon by some as an indication that his way of framing issues has been proven false (and you've seen enough of this just on this blog) I don't like to press or highlight disagreements where they are not germane. It just muddies the intellectual waters."

      I am reminded of almost forgotten requests not to deviate from the party line in East Germany so the Klassenfeind would not seize upon opportunities to undermine pseudo-communism, as opposed to what one would expect from scientific discussion. Just why would Norbert assume that when he disagrees with Chomsky this will be 'seized upon by some as an indication that Chomsky's way of framing issues has been proven false' - as opposed to thinking that Chomsky is right and Norbert wrong? Presumably he does not quite understand that outside the Galilean camp people evaluate ideas based on their merit, not based on who proposes them, much less based on whether Norbert agrees or disagrees with them...

    14. Though I promised myself that I would henceforth refrain from dealing with any of CB's many comments, she has made it impossible for me to do so. So, consider this an exception to my general rule NEVER TO BE BROKEN AGAIN: I hereby nominate CB's last comment (immediately above) for the funniest, least self-reflective piece of prose posted on this website to date. I once noted that one of the pleasures of running this blog has been the deeply amusing comments of some of the participants. You can't make this stuff up.

    15. @ norbert regarding CB. It seems to me you expose your cowardice in avoiding her. She has hit the nail on the head in every post. Her arguments are lucid and fair; yours seem snarky and defensive.

    16. I am replying to the remark "For my purposes, I take 'system of discrete infinity' to name those collection of facts that we put under hierarchical recursion plus the dependencies. So big structured phrases plus dependencies."

      This is too vague: we are talking about the biological evolution of Homo sapiens, in which there is a particular trait. This is the cognitive ability to learn, understand and produce utterances in various languages, those languages having some particular recursive structural properties. Here you are talking about the structural properties of the languages, which are at least in part the result of various processes of cultural change. So you seem to be mixing up the properties of the organism with the properties of the languages, which, no matter how much you want to focus on competence and I-language etc, is a confusion. Hauser et al focus on a cognitive ability but they aren't clear about what it is: here are some possibilities

      a) it is the ability to use discrete symbols
      b) it is the ability to put two discrete symbols together to make a complex symbol
      c) it is the ability to perform arbitrary Turing computation
      d) it is some learning ability
      e) it is some extended Theory of Mind; the ability to recognize the intentions of your conspecifics.

      The whole point of the paper is that we need to focus on the evolution of a particular ability, but I have no real idea of what that ability is, according to these authors. And I think the reason for that is that the authors don't actually agree amongst themselves, and there are clear signs of internal contradiction in the paper -- like the appeal to parameter setting models of acquisition (Baker, Yang, ...) which is extraordinary in the context.

    17. @joseph: yup. That's me. Snark and cowardice.

    18. @Alex: how's about we start with (b) with the proviso that the output of the operation that puts together discrete symbols can be input to the operation that puts discrete symbols together to produce a discrete symbol. Find me a story for how this arose in the species. After we have this we can of course go on from here, but frankly, this would be a great first step.
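
      So that no one can claim the target is obscure, this is all that (b)-plus-proviso amounts to, in a toy sketch [Python, names mine, purely illustrative]: one operation that puts two discrete symbols together, whose outputs count as discrete symbols and so can feed the very same operation. That closure alone yields unboundedly many distinct, hierarchically structured objects; the story I want is how a capacity of even this minimal sort arose in the species.

        # Minimal sketch of (b) with the proviso, nothing fancier.

        def put_together(x, y):
            # the bare combinatorial operation; a pair is itself a discrete symbol
            return (x, y)

        ab = put_together("a", "b")            # ('a', 'b')
        abc = put_together(ab, "c")            # (('a', 'b'), 'c') -- an output reused as input
        nested = put_together(abc, put_together("d", "e"))
        print(nested)                          # a discrete, hierarchically structured object

        # iterate the closure and the supply of distinct objects never runs out
        obj = "x"
        for _ in range(5):
            obj = put_together(obj, "x")
        print(obj)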

      There are some, and I include you here Alex, who look for ways to misunderstand. They think that the distinctions they complain have not been crisply drawn actually matter. Sadly, they do not. The thing Hauser et al care about is perfectly obvious, at least to me. Give me some recursive structure building. That's the LEAST capacity we have. The theories out there have nothing to say about this. After we get by the basics, we can discuss fancier stuff. I actually refuse to believe that you really don't understand that this is the big game being hunted. But if you really didn't, well here it is. Good luck, and call me in 150 years and we can talk again.

    19. At least now we know who to laugh at. Thanks for your honesty, except you forgot to own up to being defensive.

    20. @Thomas: Dankeschoen :)

      @Alex C: I admire your virtually infinite willingness to give Norbert the benefit of the doubt. But at one point even someone with your Engelsgeduld has to accept the obvious: Norbert defends the paper because Chomsky is one of its authors. Had exactly the same paper been written by, say, Arbib, Behme, Christiansen, & Deacon [not that any of us would publish such nonsense on stilts], Norbert would be all over it and point out every flaw in reasoning...

      In case you are willing to assume that Norbert has been struck suddenly with partial amnesia and really does not remember that he himself provided THE "story for how [Merge] arose in the species" re-read the discussion we had on this very blog some time ago:

      http://facultyoflanguage.blogspot.de/2013/01/darwins-problem.html

      The key words Eve and miracle mutation may ring a bell. Now if one assumes Merge [or the [b] with proviso above] could arise in one single mutation in the species [=accepts the Chomskyan orthodoxy], why on earth would one spend any more time theorizing about it - or write a paper scornful of those who are interested in the evolution of what Chomsky disparagingly calls 'externalization'? The problem of language evolution is solved as far as Norbert is concerned.

    21. @CB "Engelsgeduld" is a good word to be sure.

      @Norbert It's true that I misunderstand a lot; in this case I know the root of my misunderstanding, namely that you co-authored a paper in the very same journal only a few months ago ("On Recursion") that argued for (c),
      "This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. "

      Publishing a paper saying it's c not b and everyone who thinks otherwise is a complete idiot (even though e.g. Fitch clearly doesn't agree), and then praising this paper and claiming that it says b not c, and then accusing me of willfully misunderstanding ("it's perfectly obvious")
      ... well all I can say is it's lucky I have got Engelsgeduld to spare.

      More productively, can we talk about the language acquisition section in this paper? I don't see how that fits in.

      I also found this bit very puzzling:
      "From these formal systems it is possible to deduce linguistic universals as consequences, thereby generating empirical predictions.:" followed by some stuff on Cinque and a "limited range of variation". I really don't follow that.

    22. @Alex: boy you got me. I feel so sheepish and defensive and snarky. How could I have failed to see this? Thankfully you and CB and others have finally cleared my vision. The scales have fallen from my eyes. I now see that I will have to give you the last word on these matters. For the cognitively limited, like me, the idea that language creates recursive hierarchy seems blindingly obvious. It takes real sophistication to see through this. But as I cannot understand what you are talking about in this matter, I think that further discussion, from me at least, is pointless. I hope you and CB enjoy yourselves from here on in.

    23. I agree that language has recursive hierarchy; that is really not in question.
      The question is whether that is all that is going on; I think it is only a small part, and that recursive hierarchy is unique neither to humans nor to language.
      And therefore the complete focus on this aspect of language in the paper is a big mistake, since that is not the major part of what evolved.

      Moreover, this explains absolutely nothing about language - not about its structure, nor its parsing, nor its learnability - and it makes almost no predictions (all languages are infinite? Is that a prediction?).

      Given that this is the central claim of the paper, it seemed (and still seems) appropriate to try to pin down precisely what is meant by recursion here, but the way I put it was unnecessarily snarky.

    24. This comment has been removed by the author.

    25. (Fixing a glaring typo.)

      Hi Alex: A couple of points on what we called the Language Phenotype, from my own point of view.

      On the definition of recursion. I think this has been a distraction. I think most people agree that language has some combinatorial process that generates an infinite range of expressions. This fact does not go away even if someone gives a mathematically bad definition of it (or calls it "recursion"). For me, at least, this fact has to be accounted for by any theory of language, including a theory of language evolution.

      On the issue of variation and acquisition. An important finding is that principles deduced from the study of a small set of languages have considerable power in predicting the grammaticality of sentences in previously understudied languages. Not trivial examples such as the permutations of "John likes Bill", "John Bill likes", "Likes John Bill", but complex structures interacting with phonology, morphology, semantics, etc., which are numerous in the pages of the linguistic literature. Language acquisition shows much the same thing: children do not make arbitrary mistakes when they learn language. There are theories for these: I think they have been successful, at least useful, but by no means perfect. But these theories being wrong, or insufficiently formal, or incorrectly attributed to a species-specific UG, does not undermine the fact of limited variation, which, on my view, should also be accounted for in any theory of language, including its evolution.

      At least some of the authors of our paper have been to EVOLANG. I think Bob may have been at the very first one in the late 1990s--I remember being jealous--and I was at the meeting in Kyoto a couple of years ago. My impression is that these key empirical facts of language have not been widely discussed: the notion of language reminded me of the Blocks World and SHRDLU. (I think you expressed the same view in the comment section of some blog.) This, coupled with the likely intractability of the study of evolution of cognition--"recursive" enough?--as Lewontin forcefully pointed out, formed the basis of my critique.

    26. This comment has been removed by the author.

    27. @Alex C: I think you asked fair questions and did so very politely; but maybe you did not ask the right person.

      @Charles: as far as I can tell, the author of "Replicated Typo" remarked that none of you 8 authors had been at Evolang X:
      "The absence of all the authors from the recent Evolution of Language conference is notable, where they might have been able to interact with current research and learn of new discoveries that could address some of their problems..."
      We are aware that some of you have attended earlier Evolang conferences so I am not sure why you see a need to point this out?

      I agree that there could have been a lot more focus on linguistic issues at this recent conference. But it is not true that no one reported on issues you consider important. So sadly, the timing of your paper is quite awful, and you cannot blame people for complaining that you did not make an effort to participate in the latest meeting of language evolutionists before publishing something that many must find rather offensive. Further, I consider the attitude of Chomsky and Lewontin [I am not sure about the other authors] - that we may never be able to find a solution to the problem of language evolution and therefore we should not even try - scientifically irresponsible. No one denies the problem is difficult, but who are we to say that no human ever will solve it [as Chomsky repeatedly has done]? If people had not tried in the past to find solutions to seemingly unsolvable puzzles we would not use the internet right now to exchange ideas but would presumably still use bows and arrows to catch our dinner. There is of course one way to ensure we will never find a solution: not even trying.

      Now if you have the impression that [what you consider] "key empirical facts of language have not been widely discussed" then submit a paper, organize a workshop and I guarantee you people will listen. Of course you should be prepared for the possibility that some will disagree with you about what the key empirical facts are. You can take the Hornsteinian approach and pout if people disagree with you or you can listen and find out WHY they disagree...

    28. @Charles, I agree with you about recursion -- it doesn't need a precise mathematical definition as long as we are clear about what sense of the word we are using.

      On the variation problem, I think there is a really big question about the extent to which one should assume that typological generalisations should be baked into the genome (be part of UG). My own view is that this should be a last resort and that there are many other possible explanations that should be explored first (common descent, functional considerations, iterated learning effects, cultural evolution effects and just plain random chance). So I would be happy to come up with a theory that explains evolution, acquisition and processing, and does not explain limited variation.

      But I am sympathetic to parts of the critique: I think some of the Evolang work is missing the point; they don't really accept just how complex language is. It's fine to start off with very simple "Blocks World" type models (a la Luc Steels etc.), but there doesn't seem to be any progress from that to the kind of large and complex mildly context-sensitive grammars/lambda calculi that we actually need for language. But this may reflect my ignorance of the state of the art of the modelling work in Evolang.

    29. @Alex: just a few thoughts. As I said in my very first post on this topic: I think it would be hugely desirable if more people who focus on the complexity of language would participate in the Evolang debates. Someone like yourself, who has both expertise and a non-confrontational attitude, certainly could make important contributions...

      However, I think you are being rather unfair here:

      "It's fine to start off with very simple "Blocks World" type models (a la Luc Steels etc) but there doesn't seem to be any progress from that to the kind of large and complex mildly context sensitive grammars/lambda calculi that we actually need for language"

      Compare the amount of time and manpower that has gone into the Chomskyan generative enterprise [60+ years and thousands of professionals] to the time computational modellers have spent so far. Yet you seem to expect that they come up with results that are not only equal but superior to what Chomskyan linguists have managed. According to Norbert there is no complete Chomskyan model of human language [or even syntax]; just some fragments of English are well understood. So it seems rather unfair to expect more than fragments from Luc Steels [who represents just one approach to modelling], who has worked on the problem for only a tiny fraction of the time and with massively fewer people than Chomsky.

      Further, there are serious linguists who doubt that the generativist approach is on the right track at all. You may recall that I am still waiting for the demonstration, which David Adger promised quite some time ago, that the generativist analysis is superior to a non-generativist one in accounting for a very specific phenomenon of English [this is NO criticism of David, he at least attempted to engage with other views. But none of the leading experts who contribute to Norbert's blog have responded - so one has to wonder whether there IS a response]. It remains at least possible that Chomsky-style UG turns out to be non-existent. If so it wouldn't be a bad thing if no one tries to model UG. And modelling is not all that is discussed at Evolang - even in the 5 or 6 years I have been seriously interested in the topic there has been a lot of progress in brain research, acquisition work etc. So the dismissive attitude of Hauser et al. seems rather out of place...

    30. Apologies Christina. I'd forgotten about that. Once I'm no longer Dean, I'll re-engage properly, but you could take my paper with Jenny Culbertson in PNAS as an argument that representing hierarchical structures mentally (which I take to be the generative view) is superior to representing transitional probabilities (one non-generativist view, at least).

    31. @David. Why so exotic? I liked your paper, but do you know of a non-generative analysis of island effects, or C agreement in Irish, or binding effects, or superiority effects, or strong crossover effects, or... You get the point. Has the discussion become so corrupt that the results of the last 60 years are not worth mentioning? CB and her crowd completely disdain the work we've done and the discoveries we've made. Conceding that we have nothing to show for our efforts but what we published last week is a very bad idea rhetorically, as well as just being plain false. There are many competing generative analyses of many things, and no viable non-generative analyses at all. It would behoove us all to insist on this obvious point.

    32. My limited experience has been that trying to convince non-generativists that "island effects", etc., are Real Things in the manner that generativists study them is not easy. The implication always delivered is that the last 60 years really *aren't* worth mentioning, although the discussion mysteriously seems to trail off when it comes to, you know, trying to explain all the things that the phrase "island effects" represents without referring to island effects. We are to pretend they are not Real Things until some anthropologist has discovered their cultural roots and taken a survey or something like that.

      I've learned that most of the time it isn't worth arguing but just to hold one's cards closer to one's chest so to speak. Some people will never be satisfied until one adopts the practices of a certain kind of scientific cargo cult.

    33. @David: thanks for the reply [which did not strike me as *exotic* at all]. As I said there was no need for apologies, we all do get busy at times with our real jobs. I'll have a look at the paper.

      As for the entertaining rant by Norbert: thank you for calling people like Paul Postal 'my crowd' but that is really way too much honour. I also do not recall ever expressing disdain for genuine discoveries that have been made. Saying there is an alternative to X is not the same as saying X is worthless crap [at least not in the non-Hornsteinian universe].

      Talking of alternatives: how about the work of Kluender, Hofmeister, Casasanto, Gibson and many others on processing complexity results - showing that these basically come down to working memory limitations [as discussed for example in the Kluender and Co. LSA talk earlier this year]?

      As for strong crossover, have a look at Postal's Skeptical Linguistic Essays. He argues that there is no generative "account" of strong crossover. But his work was published a few years ago. So maybe you can provide the Hornsteinian explanation of the full range of data, which Postal claimed is unavailable?

    34. I guess that paper was on my mind, and it rather directly speaks to the issue that linguistic knowledge is not stored as a set of statistical surface properties. But Christina, you might also be interested in my 'syntax for Cognitive Sciences' paper on lingbuzz at http://ling.auf.net/lingbuzz/001990. It's essentially a summary of syntactic research over the last 50 years, so it's very superficial (and the journal wants me to cut 30% of it out, so it will get more so!), but it does try to say why generative syntax is important for thinking about human cognition more generally. I'll have the revised version soon, so let me know if you want to see that.

    35. There are, and always will be, climate science deniers in every domain of inquiry. If you/we refuse to defend what we have found and explain why it is important, then why should we be surprised that others think it's not? Generative Grammar (and Postal IS a generative grammarian, indeed the discoverer of cross over phenomena) has made non trivial discoveries about how grammars are structured. NONE OF THESE PROPERTIES HAS BEEN EXPLAINED IN ANY OTHER WAY!!! So, when asked about a result, there are tons to choose from, and we should make the other side confront these.

    36. Fair enough. :) As you might recall, way back when (the when being about 2008 or 2009) I was complaining that the online world had been left to, mmm, the sort of people who think that finding an example by Googling summarily defeats a * in a linguist's illustration of a phenomenon.

      But I've since realized (by my change of working environment) that another problem is that generativists often don't speak the kind of language that even open-minded non-generativists and scientists in other fields recognize as "science". Or mathematics. The idea that language is a field of inquiry that might need a sui generis working vocabulary and methodology is not going to occur to people outside of the field, especially if other fields of psychology use superficially similar tools. Why would it?

      You might sort of see it in the fact that CB felt it necessary to explain with a helpful example that people don't literally make infinite embeddings, since this piece of information was evidently missing from the discussion. Therefore (am I misunderstanding the logic?) discrete infinity is not a scientifically important phenomenon, at the very least.

      Let's just say that she's not the only person I've met in real life who has felt the need to explain that in these sorts of discussions. Since I spend a lot of time in machine-learning circles, I constantly run into the temptation to say that, because it isn't REALLY infinite, a good representation of human language as a mental object can just rely on a relatively crude system that learns strings up to a particular length.

      I just co-taught a software project course on building a system that generates language probabilistically for a *limited* domain of use, and even realistic *short* sentences are FREAKISHLY difficult to produce without the aid of an explicit grammar (which is what most AI generation systems use). The result doesn't exactly shock me, because it's obvious that language doesn't have to go to literal infinity to take on the effective characteristics of being infinite. It can get "bad" very quickly. Everyone in computer science knows that, but *connecting* it to the reality of language is the kind of basic-level explaining that linguists need to do.
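
      To make that point concrete, here is a minimal Python sketch - nothing to do with our actual course system, just a toy I'm inventing on the spot (the grammar, the words and the bigram setup are all made up for illustration). It pits a tiny CFG with optional relative-clause embedding against a raw bigram model fit to the CFG's own output; even at short lengths the bigram sampler wanders into strings the grammar never licenses.

      ```python
      import random

      random.seed(1)

      # Toy CFG: relative clauses allow embedding (unbounded in principle).
      GRAMMAR = {
          "S":  [["NP", "VP"]],
          "NP": [["the", "dog"], ["the", "cat"],
                 ["the", "dog", "that", "VP"], ["the", "cat", "that", "VP"]],
          "VP": [["sleeps"], ["chases", "NP"]],
      }

      def generate(symbol="S"):
          """Expand a symbol top-down by picking rules at random."""
          if symbol not in GRAMMAR:          # terminal word
              return [symbol]
          words = []
          for part in random.choice(GRAMMAR[symbol]):
              words.extend(generate(part))
          return words

      # "Learn" only surface statistics: a bigram table over the CFG's output.
      bigrams = {}
      for _ in range(2000):
          sent = generate()
          for a, b in zip(["<s>"] + sent, sent + ["</s>"]):
              bigrams.setdefault(a, []).append(b)

      def bigram_sample(max_len=12):
          """Generate word by word from the bigram table alone."""
          word, out = "<s>", []
          while len(out) < max_len:
              word = random.choice(bigrams[word])
              if word == "</s>":
                  break
              out.append(word)
          return out

      print("grammar:", " ".join(generate()))
      print("bigrams:", " ".join(bigram_sample()))
      # The bigram sampler can produce things like "the dog chases the cat
      # chases the dog sleeps" (no "that"), which the grammar never generates,
      # and it can simply run into the length cap mid-phrase.
      ```

      (One can of course patch this with longer n-grams, but the same problem just reappears one length up - which is the "it can get bad very quickly" point.)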

      As I mentioned to you privately, now that I've had this post-Maryland experience, I finally sort of think I've figured out how to present syntactic theory to people who aren't linguist-linguists and won't be, but will be psycho- or computational linguists, maybe. Well, at least I'm running a sort of experiment in trying to teach the "mentality" to young comp ling researchers before they develop certain mental habits, heh.

    37. This comment has been removed by the author.

    38. @David: Thank you, yes please keep me posted on that paper.

      @Asad Sayeed: Next time you co-teach [or solo-teach] anything on GG, you may want to make sure you advise your students that what Norbert says above is utter nonsense on multiple levels. And while you're at it, dispel several of the urban myths regarding Paul Postal that people like Norbert like to perpetuate. Specifically, make sure your students learn that:

      1. Postal worked at MIT from 1961-1965 and greatly helped to establish the early version of Chomskyan TG. He left MIT because he wanted to, not because Chomsky forced him to [in fact, at the time Chomsky wanted him to stay]. So, pace widespread urban legend, Postal's leaving MIT is no reason for any animosity between him and Chomsky.
      2. Postal lent his support to the generative semantics 'movement' of the late 1960s and early 1970s, which opposed certain aspects of the Chomskyan approach, because he was convinced at the time that this was the more promising approach [assign "The Best Theory" to your students]. You may wish to include some discussion of Chomsky's reaction: calling Postal's theory the 'worst theory' [not merely worse than his own but THE worst]. One fairly accurate account of this history is given in Huck and Goldsmith's "Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates"; make sure you assign the interview with Postal.
      3. Postal has never been a generative grammarian if generative grammar refers to I-language [= a part of human biology]. He is a Platonist about natural language and rejects the idea that languages are biological objects or generated by biological objects. He believes, of course, that humans can learn languages, but holds that it is the task of psychologists, not linguists, to figure out how kids acquire knowledge of language.
      4. His own work led Postal to reject generative accounts of grammar and to develop his own framework. You may wish to assign Postal's 2011 book "Edge-based Clausal Syntax: A Study of (mostly) English Object Structure"; make sure the students read the introduction regarding Barrel A grammars.
      5. At this point it may go without saying, but just to be sure: emphasize that anyone who claims, as Norbert did, that Postal IS a generative grammarian is either completely ignorant of Postal's recent [= post-1974!] work or deeply immersed in wishful thinking that denies anything threatening one's ideology.

      Now, in case you do not have the time to teach history, you need to ensure your students learn at least that the strong crossover phenomena Norbert refers to have no explanation in what he calls "generative" accounts, and that the current most general explanations of island phenomena, taking a wide range of corpus, lab-psycholinguistic and introspective evidence into account, point away from a structural account.

    39. An observation: I think the generativist/non-generativist distinction is being used in several different senses here:

      Is a generativist someone who:

      1. uses formal generative grammars,
      2. thinks linguistic theories are ultimately about cognitive abilities, or
      3. works in Mainstream Generative Grammar?

    40. For the purposes of the discussion I think we're talking about 2 and 3, to the extent that 2 and 3 overlap. At least, that was my impression. Someone in categories 2 and 3 might also use 1 - and IMO probably should, but that's another story.

    41. @Christina: You are right to suggest that I wouldn't have time to cover the history of these arguments in that degree of detail. In any case, the students will, for the rest of their stay here, have exposure to other approaches from almost the entire remaining department.

      As a side note: I find the ideas "not generated by a biological object" and "yet learnable by children" to be as counterintuitive a claim as any that I have heard a "mainstream generative grammarian" make.

      This discussion has, however, inspired me to assign this Hauser et al. paper for a student presentation (they're required to make presentations under my supervision)---to a student who is an evolang enthusiast and no shrinking violet, with the instruction to make the discussion "fun".

      On your last point, I actually *do* corpus and lab psycholinguistic studies of linguistic phenomena from a functional perspective (in the sense of working memory/attention/UID and other such concerns), as well as applications, and I have a bit of familiarity with the literature you're talking about... and I would say that we are far from a position in which functional accounts can supplant structural accounts - particularly of the full range of island phenomena - because they are answering different questions. The starting problem (and communications disconnect, to put it another way than I did in previous posts) is deciding what it would even mean for a functional account to supplant a structural account, what it means to "explain", and so on.

    42. @Alex: You are absolutely right that there are all kinds of ambiguities in the term 'generative grammar'. As far as Postal is concerned: his current work does not fall under any of the 3 definitions you offer. And if he says he is not a generative grammarian, I see no reason to doubt this.

      It is not surprising that Norbert refuses to accept this. Norbert has basically unlimited trust in Chomsky, and Chomsky told me in an e-mail exchange that "Edge-based Clausal Syntax", which he said he had not even read, must be a work of generative grammar because Postal is a generative grammarian. IMHO this claim is an indication of the virtually religious dogmatism of a man who cannot accept that people are capable of changing their mind about the best way of doing linguistic research [in Postal's case, moving from something covered roughly by your first definition to a non-generative framework].

      One more thing: Chomskyans like to create the impression that anyone who rejects generativism has to accept a constructivist account [of, say, someone like Tomasello]. This is of course not the case. Non-generativists come in many stripes, and if Norbert wants to claim that none of them can account for the phenomena he wrongly claims Chomskyans can account for, he has to do a lot better than demolishing a strawman or two...

    43. @Asad Sayeed:

      Before getting into any detail, I would like to make sure that I understand correctly what you assert here:

      "I find the ideas "not generated by a biological object" and "yet learnable by children" to be as counterintuitive a claim as any that I have heard a "mainstream generative grammarian" make."

      Are you saying you find it counterintuitive for any X to be learnable by children if X is not generated by a biological object? In other words, do you believe children can only learn about things that have been generated by biological objects? If so, how do you explain children acquiring knowledge about geography, astronomy, quantum physics - just to name a few? Mountains and rivers, planets and galaxies, etc. are not generated by a biological object, yet we seem quite capable of acquiring knowledge about them ...

    44. Far simpler.

      Children learn (acquire) language and, from that acquisition, produce language. Biological objects are the only entities that produce language, and they only produce it after a period of acquisition (whatever that period may be). Children learn *about* geography, but they do not produce continents. They might produce *language* about continents, however.

      This seems to me to be a relatively elementary distinction. So in a sense, yes: I find it counterintuitive for any X to be learnable (acquirable) by children if X is not generated by a biological object. I do not find it, however, counterintuitive for Xs to be learned *about* without being generated by a biological object. I am happy to say that there are many things that are not generated by a biological object that people learn *about*. Using language, visual aids, direct experience, etc.

    45. "Biological objects are the only entities that produce language, and they only produce it after a period of acquisition (whatever that period may be)."

      May I assume from this that you reject strong AI? In other words, if a hypothetical computer/robot produces something indistinguishable from English [= passes the Turing Test], would it not count as language because it is not produced by a biological object? On the other hand, does what is produced by a well-trained parrot count as language because the parrot is a biological object?

    46. I don't know how you got that implication from what I wrote. If and when someone builds a strong AI that can produce human language and has the characteristics of a human mind, it would ultimately have been built by a biological object and connected to the history and genesis of biological objects that produce language, and the explanation of its capacity would be intrinsically connected to the explanation of the capacities of biological entities. When it is self-replicating, it will presumably become some kind of chapter of the evolang story.

      The parrot produces language in a "broad" sense, but does it produce and acquire language in the manner that a human child produces and acquires language? Would it have done so if there weren't a human being to teach it language?

      I'm not entirely sure where you're going with this. Are you attempting to convince me that language exists independently of biological objects? Would it have existed if no humans had existed? Perhaps it might, if say squid evolved it. How then? Would it look like the language that humans produce? Why would it?

      It's all very esoteric to me. And very science fictional. May I interest you in a sci-fi reference?

    47. Can we not have this discussion thread-jacked onto Platonism in linguistics again? Please?

    48. @Alex: thanks but no thanks - I have seen quite enough already. But I am eagerly awaiting Norbert's report of the strong crossover results that Postal is ignorant of. THAT should be worth our undivided attention.

    49. @Alex: Sorry. I thought it would go somewhere more interesting. *shrug*

    50. Going back to the evolang thing: a good example of where I agree with the critique is the paper "Linguistic structure is an evolutionary trade-off between simplicity and expressivity" from CogSci 2013 by Kenny Smith and some others (available here).
      This is a really good paper, I think (so I hope the authors don't mind me critiquing it a bit): there are some well-thought-out simulations, and it's looking at a central problem, BUT the languages it is looking at are really simple (a two-letter alphabet, and all strings are of length 2).

      So of course I take the point that it is appropriate to start with a really simple system, that this keeps the computational costs down, and so on. But I still feel dissatisfied by just how oversimplified the "language" is.
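
      Just to put a number on how small that space is (the meaning-space size below is my own guess for the sake of the arithmetic, not a figure from the paper):

      ```python
      from itertools import product

      alphabet, length = "ab", 2
      signals = ["".join(s) for s in product(alphabet, repeat=length)]
      print(signals)                      # ['aa', 'ab', 'ba', 'bb']: 4 possible forms

      n_meanings = 4                      # hypothetical, purely for illustration
      print(len(signals) ** n_meanings)   # 256 possible meaning-to-signal mappings
      ```

      Whatever one thinks of idealisation in general, that is the scale of "language" the simulations are exploring.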

    51. @Alex: maybe it is because I actually attended Evolang X that I am getting the feeling you're beating a dead horse. The issue that many of the models are based on hugely oversimplified "languages" was raised at every talk by a modeller that I attended. It has been duly noted. But no one claims to have all the answers, or to know all the steps involved in language evolution. Now what is the best way to move on? Listen to Norbert and put these questions on ice for 150 years, or try to take some more baby steps towards the still very distant goal?

      Regardless of how you answer my question: if the simplicity of the models disappoints you, how much more dissatisfied must you be with the Chomskyan models of the biological language faculty? Do you think that, because these models are disappointing, we should take a 150-year hiatus on generative grammar work?

    52. @Christina:

      Riiiight... but the criticism isn't only that these probabilistic models fail; it's that there's a principled reason these models can't be the whole, or even most, of the story. The question isn't "how does the mind pick amongst a pre-specified set of hypotheses (for, let's say, a grammar) given some data?"; it's "where do the hypotheses come from?"

      These models have nothing really to say about that, other than appealing to a structured environment - which we have known to be a problem since at least David Hume. (A toy sketch of the point is at the end of this comment.)

      And all this is quite apart from the fact that even if these models did succeed in modelling language comprehension in some manner (notice that spontaneous language production that is appropriate to circumstance but not determined by context is not even on the agenda), there would be no justification for concluding that that is how humans produce and comprehend language. The same goes for every other kind of thinking: the way a computer plays chess tells us approximately zero about how humans play chess. This is elementary stuff, really. Failure to take it seriously is why the evolang stuff isn't really taken seriously outside the tabloids.
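
      Here is that toy sketch of the hypothesis-space point - entirely my own invention, not anyone's actual model. A size-principle Bayesian learner zeroes in on the right toy "grammar" from three strings, but only because the candidate grammars were written down by hand in advance; where the candidates come from is exactly what the model is silent about.

      ```python
      from itertools import product

      # All strings over {a, b} up to length 3: the whole "world" of this toy.
      ALL = ["".join(s) for n in range(1, 4) for s in product("ab", repeat=n)]

      # Pre-specified hypotheses = candidate grammars handed to the learner.
      HYPOTHESES = {
          "only-a":   {s for s in ALL if set(s) == {"a"}},
          "starts-a": {s for s in ALL if s.startswith("a")},
          "anything": set(ALL),
      }

      def posterior(data):
          """Posterior proportional to prior * (1/|h|)^|D| when all data fall in h, else 0."""
          prior = 1.0 / len(HYPOTHESES)
          scores = {}
          for name, ext in HYPOTHESES.items():
              fits = all(s in ext for s in data)
              scores[name] = prior * (1.0 / len(ext)) ** len(data) if fits else 0.0
          total = sum(scores.values())
          return {name: score / total for name, score in scores.items()}

      print(posterior(["a", "aa", "aaa"]))
      # "only-a" gets most of the posterior mass -- but the learner never had to
      # invent "only-a", "starts-a", or anything else; the hypotheses were given.
      ```

      Swap in fancier likelihoods or richer grammar fragments and the structure of the exercise stays the same: inference over a menu someone else wrote.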

    53. @Christina: Weren't we just talking about false dichotomies?

      I think there is a methodological lesson in the successes and failures of 80s-style connectionism/PDP: what works well on toy examples may not scale to the full complexity of natural language.

    54. @Alex: I am sorry, I must be missing something you want me to do/say. I have repeatedly said I AGREE with you that some work presented at Evolang was disappointing. How many times do I need to repeat it until you accept that I share some of your concerns? No one has said modeling provides all the answers. We are commenting on a post that celebrates the Hauser et al. paper - so if you continue to bring up the issue [vs. mentioning it once and, after it has been acknowledged, moving on to something else], what is your point if not support of Hauser et al. regarding modeling [I am aware you do not support them re other issues]?

      I am personally not convinced by all the work done by Simon Kirby, BUT I think he is way too smart to have learned nothing from the lessons of the work you mention. And notice that the problem of language evolution is not the same as the problem of language acquisition or the problem of modelling full-scale human language. At one point our distant ancestors had no language - we have a wide variety of massively complex languages. Our closest primate cousins have, at very best, some 'toy language' - even though they have had as much time to evolve away from our LCA as we did. So why is there this huge difference? Modelling what MIGHT have happened at the very beginning of the transition from no language to human language seems one way to tackle the problem. By itself it probably won't succeed. But it is not the only thing language evolutionists do. To quote from what Mark says below [my apologies for inserting caps]: "[the work] is interdisciplinary, uses multiple methods, TRIES to interface with broader issues and TRIES to contribute to a cumulative science of language". Trying to do X is not the same as having succeeded in X. So maybe you can clarify why you continue to come back to the 'having not succeeded yet' part?

  4. Once again, a discussion between "linguistic" fields boils down to the question "what do we mean when we say the word 'language'?" One group views the study of language as "the study of the capacity for discrete infinity (with limits) in conceiving of the production of linear strings", and the other group(s?) think(s) of it as a collection of other characteristics. The only (real) point of conflict seems to be that the latter group(s?) think that discrete infinity is not a Thing...

  5. Turning the sociological observation on its head, how many language evolution people regularly go to NELS or WCCFL?

  6. Norbert writes (I'm leaving out the all caps shouting):

    "Generative Grammar (and Postal IS a generative grammarian, indeed the discoverer of cross over phenomena) has made non trivial discoveries about how grammars are structured. [...] So, when asked about a result, there are tons to choose from, and we should make the other side confront these."

    I find the talk about "we" and the "other side" sociologically interesting, but ultimately unappealing. As Alex Clark notes, there are a couple of different senses in which one might define the sides here; that's a first problem. Also, for me as someone from Europe, it feels a bit, how shall I put it, old-fashioned to think in terms of "sides". I think younger generations of scientists are way more heterodox and more attuned to the necessary plurality of the language sciences than you may realise, or than may be apparent from some of the regular commenters on this blog. This goes both ways: new generations are aware of important advances of the past decades, but also of the affordances of new data, methods, and theories for pushing linguistics forward.

    This is why I quite like Charles Yang's and David Adger's recent work (just as I like Morten Christiansen's and Simon Kirby's work): it is interdisciplinary, uses multiple methods, tries to interface with broader issues and tries to contribute to a cumulative science of language. I don't think all the talk of "sides" and worries about people not acknowledging the value of certain discoveries contributes to that (hell, even Evans & Levinson acknowledge that broadly generativist approaches have made important contributions to our understanding of how grammars are structured).

    For the same reason, I think the Hauser et al. review ultimately misses the mark: it notes that under a very limited conception of language, the evolution of one narrowly defined property may still be mysterious. That's okay (who could be against isolating specific properties for detailed investigation?), but the slippage from language-as-discrete-infinity to language-in-all-its-aspects is jarring and misleading. "The mystery of the evolution of discrete infinity" might have been a more accurate title. Although clearly that would draw a smaller readership.

  7. I would like to have little chatbots, all running the same computer code (implemented in fewer than 1000 lines of Prolog, or 10 lines of English), which would cooperate by talking to each other. They would all have access to a large database of text such as is provided by Google.

    Richard Mullins
