Tuesday, March 5, 2013

Going Postal

I have avoided writing this post for the last three weeks. I really didn’t want to do it and still really don’t want to do it. The reason is that I have reluctantly concluded that the position that I am going to discuss is silly. This position is a critique of the mentalist vision of Generative Grammar, the one that Chomsky has been sedulously developing over the last 50 years.  It flies under the flag of Platonism and its current most vigorous exponent is Paul Postal.  Nonetheless, the argument, to the degree that I understand it, and sadly I think that I do, is very poor. It lays not a glove on its Chomskian bête noire and were its advocates less vociferous I would have left it to die in the obscurity that it deserves. However, given the play that this position has received in the discussion threads it deserves some attention. 

Before proceeding, some caveats.

First, Paul Postal is, in my view, an excellent syntactician. I wish that I had made even a tenth of the contributions to syntax that he has made. Were I to follow my own advice (here), this would mean that I would think very hard before attributing to him a silly position. I will outline what I take the view to be below and I apologize in advance if I got it wrong. But I really don’t think I did as he has been very careful in laying it out. However, I want to make it clear that though I find the argument silly, I consider Postal himself to be an extremely talented and interesting linguist.

Second, I will not be attacking Platonism here, at least not much (well, a little, at the end). In other words, my main concern is to examine the claim that the Chomsky position is incoherent, a claim that Postal and others (Hi C!) have repeatedly made. I will not be extensively discussing the virtues or vices of a Platonist conception.  I actually have little sympathy for this understanding of the project, but, hey, whatever floats your boat. My aim is not to convert the heathens, but to defend the realm.  Incoherence is a nasty vice. Were the biolinguistic-mentalist interpretation of linguistics truly incoherent this would be a serious problem. My only aim is to show that there is currently no argument that I can see in support of this conclusion. Assertions, yes. Arguments, no. 

Last point: I will concentrate entirely on one text, Paul’s Remarks on the foundations of linguistics (here).  I concentrate on this because it was aimed at a philosophical audience and is a concise version of the argument.  If there are other better arguments elsewhere, then I am sure that partisans of the truth will be happy to reproduce them in the comments section for us to examine.  I encourage you to do so if you feel there is a better argument out there.[1]  But please don’t simply refer to the existence of the arguments, actually give them. Produce the arguments.  Then we can see if they are better than the one that I deliver below.  Ok enough (cowardly) prologue.

The argument is actually very simple. It claims that the mentalist perspective rests on a confusion between Natural Language (NL) and knowledge of NL.  More elaborately:

1.     A field is about the objects it studies
2.     Linguistics is about Natural Language (NL)
3.     Knowledge of NL is not NL
4.     Mentalism identifies the two.
5.     As a result, the mentalist conception of linguistics leads to incoherence

Here is another version, a reductio:

6.     Say that there is knowledge of NL
7.     Therefore, there is NL
8.     Therefore, for any specific NL, knowledge of that NL is distinct from that NL

What’s the basis of the inference from (6) to (7) and (8)?  It’s a consequence of the “general features of the knowledge relation, regardless of how ultimately analyzed.” In particular, “for any X, knowledge of X can only exist if X has existed or does exist. Consequently, the assumption shared by both foundational positions [Platonism and Mentalism/NH] under discussion that there is knowledge of NL entails that there is NL” (235).  As Postal emphasizes, the key critical observation is that ‘knowledge-of’/‘know’ is a two-place predicate/relation. From this it follows that NL is distinct from knowledge of NL and so any attempt to analyze NLs as mental states must lead to incoherence.

That’s the argument. So far as I can tell, in its entirety. It rests on an analysis of the word ‘knowledge’ as a relational predicate and concludes that any identification of NL in terms of mental states must be incoherent.

Now, Postal is quite aware that Chomsky has rejected this analysis. He quotes Chomsky liberally as noting that he does not believe that language has an “objective existence apart from its mental representations” (235, note 5, quoting from Language and Mind). However, he will have none of this. Why? Because ‘knowledge’ is a two-place relation and that’s that.

To belabor the point, let’s see why this is such a poor argument.  First, whatever the “logic” of the term ‘knowledge’, there is nothing incoherent in proposing that in the domain of linguistics we are going to reject the relational reading of the term.  It is perfectly coherent to suppose that when the object of know is NL then the predicate should be treated as one-place, not two-place.  This would treat knowledge of NL like taking a bath, having a headache/hallucination, weighing 10 lbs, or having three dimensions.  To say that Sam has a headache is not to postulate a relation between Sam and some headache. It is to ascribe to him a certain unpleasant mental state. Similarly with taking a bath (aka bathing), weighing 10 lbs, or having three dimensions (aka being three-dimensional).  These look transitive grammatically, but they quite plausibly denote properties. Chomsky’s proposal is (and has been) that the same holds for locutions like knows French/English etc. 

First, note that if this is conceded, then the arguments above disintegrate. Indeed, if you allow this move, then there is nothing incoherent about analyzing knowledge of language as having/being in a certain mental state. On this well known view, knowing English is one state, knowing French another, etc. UG will be the study of these possible states and how they arise in the minds/brains of their possessors. We can ask how these states enter into processing and production, how brains instantiate these states, etc.  All the stuff that Chomsky and Co. have been saying forever turns out to be perfectly coherent, indeed anodyne.

Now one might object that this is not kosher, you can’t just analyze terms any way you want.  ‘Knowledge-of’ is a relational term so treating it as a property denoting expression in this context is illicit.  But, that’s just wrong.  Chomsky is providing a conception of a scientific project. In so doing, he is free to modify ordinary locutions as he sees fit in service of advancing the specific investigations at hand. In other words, the aim here is not to provide an ordinary language analysis of know but to provide methodological underpinning for a practice.  Chomsky’s proposal is that we understand know NL as ascribing a property to a person. In virtue of having that property that person is endowed with certain capacities. Indeed, having that property, being in the specified state, is how he proposes to analyze linguistic capacity (aka, competence). Being in a certain mental state endows me with the capacity to speak my idiolect. Being human endows me with the capacity to acquire linguistic states when put in the right environmental circumstances. None of this is mysterious. It may be wrong (NOT!!) but it is not close to being incoherent. Or, more relevantly, observing that in ordinary language know is a two-place predicate is entirely beside the point.

Virtually all the other critiques offered rest on this one.  Postal provides an analysis of NL as an infinite collection of sentences and then lambastes Chomsky for not being able to provide a coherent understanding of NL sentences given his mentalism. Why? Because there are an infinite number of sentences but mental organs are bounded in both space and time and so cannot “be placed in one to one correspondence with infinitely many things” (244). Of course, there is nothing odd about mental states embodying finitely specifiable recursive procedures that can characterize the properties of an unbounded number of sentences. Indeed, the whole idea from the get-go has been that knowledge of G amounts to embodying a finite specification of such recursive rules.  Postal is, of course, right that in practice these mental states will never be used to produce or understand more than a fraction of those tokens that they could produce or understand. But this is just to say that we have an open-ended grammatical potential, one bounded in various ways by the physical properties of the machine that embodies those states.  None of this is mysterious.  We have perfectly good models of how machines can embody recursive procedures whose application is limited by size of memory, CPU, energy allocations, etc.  None of this is opaque and none leads to incoherence.  Chomsky has been very careful to emphasize the three I-s: intensional, individual, internal.  It is commonplace to finitely specify an infinite set of objects. Chomsky’s rejection of sentences as sets is not a serious problem.
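The point that a finite specification can characterize an unbounded set, while its use is resource-bounded, can be put in a few lines. The following is a toy illustration of my own (not a linguistic proposal): a recursive rule stated once and for all, whose outputs have no longest member, run under an explicit depth bound standing in for finite memory and time.

```python
# Toy illustration (my own, not a linguistic proposal): a finitely stated
# recursive rule, S -> "snow is white" | S + " and snow is white",
# characterizes an unbounded set of sentences.

def sentences(depth):
    """Enumerate the sentences licensed by the rule up to a given depth.

    The rule itself fixes no longest sentence; the depth bound models the
    finite memory/time of whatever machine embodies the rule.
    """
    if depth == 0:
        return ["snow is white"]
    shorter = sentences(depth - 1)
    return shorter + [s + " and snow is white" for s in shorter]

# The specification is a handful of lines (finite), yet for every sentence
# it licenses there is a longer one at the next depth: open-ended potential,
# bounded in use by resources rather than by the rule.
```

The generator never changes as the bound grows; only the resources do, which is the competence/performance distinction in miniature.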

There is more, but none of it is enlightening in my view. Before quitting let me cast a few beady eyes on the Platonist alternative that Postal endorses. It may not be incoherent, but to me it is deeply unattractive. Here are three quick reasons why I find it of dubious interest.

First, though Postal insists that he has nothing against the study of knowledge of language, he does want to insist that linguistics proper is a purely formal study not subject to the whims of empirical work.  As I mentioned in the last post, I find this position to be counterproductive. The aim should be to expose yourself to the greatest empirical buffeting.  Hiding one’s favorite views in a formal cloister insulated from potential contact with other kinds of scary facts cannot help but breed a kind of unfortunate intellectual insularity. 

Second, the Platonist conception leaves it entirely unclear how one is supposed to bring the study of NL into contact with the study of knowledge of NL. Sentences are abstract, have “no locus in time or space” and “can cause no reactions” nor “itself be caused by anything” (239).  If so, it remains a mystery how they can be known. By insisting on the great gulf between what linguistics on this view studies and what can be studied empirically, we are left with no idea why it is that linguistics should concern anyone with cognitive interests. Of course, these Platonist views can be modulated.  Plato himself had a Platonist program quite congenial to mentalist investigation precisely because he grounded the perception of forms in pre-existing innate knowledge. On this view, then, studying the mental could be a way of studying the forms.  Moreover, for Plato physical particulars were reflections of the forms, albeit distorted ones (remember the cave analogy).  In other words, Plato went out of his way to attenuate the ontological distance between his forms and the empirical and mental world.  Postal goes out of his way to emphasize the great divide and so leaves it entirely unclear why anyone interested in mentalist or biolinguistic sorts of concerns should care about what he is working on. As it turns out, Postal’s metaphysical sympathies don’t really affect his linguistic conclusions and so as a consumer I can happily understand his interesting syntactic proposals and use them as I see fit.  In other words, I can ignore the Platonism as it is effectively idle.

Third, the idea that the practice of linguistics looks like anything done in a math department is heroic.  Most of what linguists do is go around asking other native speakers for judgments.  Last I checked this is not the practice of working mathematicians.  Recall the joke about the linguist doing number theory. Thesis: all odd numbers are prime. 3 is prime, 5 is prime, 7 is prime, 9 is prime. Hmm, 9 is prime? Can you get 9 as a prime? I think I can.  From where I sit, the practice of syntax has virtually no resemblance to what takes place in math departments. We truck in analyses, they in proofs. We do some “modeling,” and describe empirical phenomena in semi-formal ways, but to take this as serious math, well, I would advise against submitting it to JAMA.

Fourth, if one is a serious Platonist about NL then the following becomes a serious option: normal native speakers of an NL may not know their own NL.  As Postal has correctly noted, a native speaker is exposed to at most a minuscule subset of the linguistic objects of an NL.  And given that we don’t identify an NL with a mental state of a native speaker, there is no reason to think that native speakers in general do know their NLs.  I find this view to be, ahem, odd.  If native speakers don’t know their own NL then nobody knows anything. But for a Platonist the fact that many/most/all native speakers know their NL in virtue of being competent speakers of their NLs should be a bit of a surprise.[2]

None of this implies that Platonism is “incoherent.” In matters methodological I am a pluralist.  Postal is not similarly catholic. He has claimed that Chomskyan mentalism is incoherent. From what I can tell, he is just plain wrong.

I will end here.  I cannot recommend that you take the time to read Postal’s Platonic disquisitions. I learned little of interest from the one paper I read.  Moreover, the reading was deeply unpleasant. Postal is clearly a very angry man.  His discussions are crude and, in my view, deeply offensive.  I believe that he should apologize for the way that he has conducted himself. He has proven to me that Chomsky is actually a nicer man than I already believed he was.  Only that explains why he has refrained from addressing such weak arguments made with so little decorum.

[1] There has been some discussion of apparent problems in the threads to the Fodor posts. Christina has mooted several and Avery, Andy and Jan have delivered significant push back.  To my mind, they have easily gotten the better of the discussion, but that is for you to judge.
[2] It’s instructive to contrast this with mathematical knowledge, which is indeed opaque to most humans.


  1. I think a possible opening that Chomsky might have left for the Platonists is an unclarity (to my reading, for example, of KLT from 1986) in exactly what i-language is: a parameter setting representing somebody's acquired knowledge of a language, representable as a finite vector according to the mid-80's P&P doctrine (ignoring the lexicon), or the infinite structure-set produced by this setting according to some account of UG, or something completely different, which I have so far failed to understand?

    There is in fact a sense in which I would accept Platonism, probably not one that Postal or Christina would accept; it's that, since (some concept of) e-language is the medium of transmission for i-language, we need a concept of e-language, which will be something induced by i-language and the environment, and to understand how it manages to transmit i-language, we'll need to analyse it mathematically. PPA hoped to trivialize this problem by having the internal representation of an i-language be a smallish finite vector, but this no longer looks plausible to many people, & I think we need something (a lot) more sophisticated.

  2. &, of the three arguments at the beginning of the Katz & Postal paper that Christina linked to, I only took on the 'Veil of Ignorance' argument as manifested in infinitism, due to feeling the most confident in my ability to deal with it; there are two more to go. The 'Necessity Argument' (#2) involves interfacing syntax to logic so as to get a coherent concept of linguistics including semantics, which strikes me as potentially much harder to deal with, in part due to the time depth and complexity that has grown up around the arguments concerning the status of entailment and other 'Meaning-Based Properties and Relations' (my preferred rerendering of Larson & Segal's term 'Logico-Semantic Properties and Relations'). K&P assert for example that 'psychologism' about logic has been discarded by almost everybody, so we have to discard it for syntax in order to bolt the two together in the manner we would like to accomplish. I don't really believe this claim, but neither do I think I know how to refute it (yet).

    1. @ Avery: I had drafted this last night since you had asked about the veil of ignorance argument and might as well post it here. It ONLY deals with this one argument [not the strongest in my books but not obviously wrong either]

      I admit the ‘Veil of Ignorance Argument’ is not my favourite but here is what seems to be at issue. Briefly: behind the veil of ignorance you look at all the properties of language you know at this time and decide, based on that knowledge, what is the most likely mechanism that could account for/underwrite these properties. Chomsky decided, when he was in this position, that the human brain must contain a language faculty/organ [LF] that generates language. He made this decision before he knew the specific properties of the brain that might be involved in this organ.

      So on Chomsky’s account NLs and knowledge of NLs are the same thing. Hence he is committed to an account of NLs which faithfully reflects whatever actual human linguistic knowledge turns out to be. Now here is the problem: at some point empirical research may reveal that the human language faculty cannot generate any English sentence token that is longer than S. So S would be the ‘longest sentence’ of English. But this clashes with one of the assumptions that went into the thinking on which Chomsky [when behind the veil of ignorance] based his decision to postulate the LF, namely that there is no S for any NL. So he has a problem on his hands. Is it insurmountable? That depends.

      It seems accepted by most generativists [certainly by Chomsky himself] that there is no S for English. For example a standard induction from the basic grammatical evidence about, say, coordination, projects the regularity that, for a sentence of any length, there is a longer one, formed by conjoining it with another under appropriate structural constraints. There are, for example, fully grammatical structures of which S is a proper part, e.g., 'Snow is white and S'. Therefore, the inductive conclusion that there is no longest English sentence is inconsistent with the empirical finding that S is the longest English sentence.

      You may think: fine, that is no big deal, so we postulate the longest sentence is S-4 [for the 4 elements in ‘Snow is white and’]. But this will not work because you could conjoin S with “This snow is of the most beautiful white I have seen in the last twenty years” – so we would be at S-16. In fact it would be impossible to predict the length of S before we know the relevant facts about human brains. But our current knowledge of English suggests there is no longest sentence S. Now you might be willing to bite the bullet and say the human brain determines in fact the maximal length of a possible sentence of NLs. Any sentence longer than this is not part of an NL.

      Imagine a slightly different case: we build a computer that has a greater storage capacity than a human brain [say twice as big]. This seems at least possible. Now a human brain can only generate S but this computer can also generate “Snow is white and S” and “This snow is of the most beautiful white I have seen in the last twenty years” and countless others. Should we say that all these sentences S+n [for any n<S if the computer has twice the human brain capacity] are part of English? I do not know about your intuitions but I would find it very odd to exclude them.

    2. I think the Benacerraf style epistemological arguments against platonism, that Norbert alludes to, are very strong. Do you know anywhere, Christina, where these arguments are addressed?

    3. @ Alex C.: Yes, Katz addresses the Benacerraf style epistemological arguments specifically in several of his books. I had to return my books to the library, but if you can wait a bit I can get you more specific references.

    4. I’ve found Postal’s elaboration of the comparison of natural language to logic and mathematics both interesting and illuminating. He emphasizes that the truths of logic and mathematics are independent of people’s intuitions; they are imposed from the outside and thus have prescriptive power. Therefore they can’t be interpreted as related to (human) biology. Now if natural language (NL) is supposed to be of the same kind, it has to have prescriptive power too, and can’t be related to biology either. From this perspective, of course, the focus on data from both speakers’ judgements and corpora, quite common in linguistics of all sorts, is flawed, for NL can't be related to such data; rather, it is what the most knowledgeable and respectable grammarians claim it is.

      Note that this view of language is more or less compatible with folk linguistics, the belief that what we have acquired is a kind of corrupted version of the correct language prescribed by grammarians and stored on the dusty shelves of libraries.

      Katz refers to Benacerraf in “The Unfinished Chomskyan Revolution”, Mind & Language 1996, 11, 270-294 (available on the Web). He says basically what I've described above, of course, in a much more sophisticated way than I do (and in terms of language vs. knowledge of language).

    5. @Christina, in your answer to me, I really don't understand the last two paragraphs - I don't want to say that there's a maximum sentence length, so why should I worry about problems for people who are trying to say that? And, in par 3, it might indeed be the case that empirical research will reveal that there is a longest sentence length in English, but drawing wrong conclusions from limited data is an inevitable hazard of empirical inquiry.

      & it appears to me to be false that Chomsky (at any stage from LSLT to his more recent papers) required that there be no maximum sentence length for any NL, since there is no deep principle in the theory that makes it impossible for there to be parameters that might shut off the recursions at some point (and in earlier theories, there might happen to be no generalized transformations, or no combinations of PS rules that could apply recursively). That is pretty much what Pesetsky et al. pointed out in the first paragraph or so of their article about Everett 2005, but then they went on to blur the focus and spoil the effect by trying to show that Everett was wrong about everything.

    6. Right. Note that the last paragraphs dealt with how someone MIGHT respond [so my 'you' was not directed personally at you - a hazard of a German speaker using English, since German does not have this ambiguity]. It was not part of the argument but my suggestion of how someone might reply to it.

      I am not aware that Chomsky has claimed that there IS a maximum sentence length, nor where he has specified any procedure for determining one. But even though I have read most of what he wrote, I could have overlooked something. So if you can find a text where he specifies such a procedure, I would love to read it.

    7. @Avery: here is one citation regarding the 'no longest sentence' claim by Chomsky:

      "there is no longest sentence (any candidate sentence can be trumped by, for example, embedding it in ‘Mary thinks that. . .’), and there is no non-arbitrary upper bound to sentence length . (Hauser, Chomsky, Fitch, 2002, p. 1571)

      Again, in my books the language here is very clear. And given this passage, it would seem that there are many many sentences [of quite finite length] that cannot be generated by an actual human brain, because any potentially longest sentence that ever has been produced can be trumped by, for example, embedding it in ‘Mary thinks that. . .’. For that reason it would seem that, pace what is claimed, for actual human brains there IS an upper bound to sentence length [determined by properties of the brain and, if I recall right, some have suggested how one could calculate such bounds]

      So even though you can argue that the 'no longest sentence' claim does not require sentences to have infinite length, given the actual limitations of biological brains many of the very long possible sentences can neither be generated nor stored by human brains. So it would seem to follow, based on the above quote, that language cannot be a biological organ.

    8. There is arguably no longest sentence in English, but that doesn't mean that there is no longest sentence in Piraha - from what Dan says, there is, & his account looks more believable to me than the alternatives ATM (but there are a number of people around the ANU who want that data to appear!)

      "And given this passage, it would seem that there are many many sentences [of quite finite length] that cannot be generated by an actual human brain because any potentially longest sentence that ever has been produced can be trumped by, for example, embedding it in ‘Mary thinks that. . .’."

      I'm really missing something here. How can the possibility that a potential candidate for longest sentence be trumped prevent a human brain from generating anything at all, including some implausibly long sentence? Mortality, boredom & the heat death of the universe are the obstacles to producing very long sentences (unless you think that the entire structure has to be represented as a brain state at some single point in time).

    9. In which case, finite brain size is an additional limit. But that is a bit flexible too - for example, people can memorize and recite with understanding things that are much longer (and better!) than they could possibly produce by themselves, especially spontaneously. Such as the famous Schliemann, who supposedly learned English by memorizing Paradise Lost (I used to be able to recite Book II). So even if you think that finite brain size is relevant, it's still a squishy limit which needs to be handled differently from the fact that you can't say *der beliebte von allen Dichter in German (but could say its word-for-word translation in Greek of all eras).

  3. Norbert, you create an opposition between knowledge as a state of the mind/brain (one-place) and knowledge as an individual's relation to something external (two-place). However, all cases of knowledge I can think of involve both. Thus, if you know the text of "America, the Beautiful", you are in a --no doubt blissful-- mental state, but this state nevertheless relates you to something external to your brain, namely a text belonging to a shared culture and written before your birth. It would be silly (to use your term) to see "America, the Beautiful" exclusively as a state of Norbert's mind.

    I don't see how it is different in the case of language. Thus, the average American college student knows perhaps 50,000 words, plus a multitude of expressions and idioms. Like "America, the Beautiful", most of this is shared heritage, existing long before the individual speaker of English was born. Last but not least, somebody who knows English also knows how to make new, complex expressions out of the inherited words, aka sentences. In building sentences, the individual is trying to follow a norm that is perhaps partially determined by innate factors but is ultimately sanctioned by the community, hence is external to the individual mind. In short, language has a vast external dimension, and to the extent that language involves states of mind, these are relational with respect to the external objects in question.

    The lexicon with its external dimensions is the nemesis of linguistic internalism. It cannot be dismissed by relegating it to some FLB (faculty of language in the broad sense) as opposed to the real stuff ("recursive Merge"). Subtract the latter and you still have something language-like (think of traffic signs). Subtract the lexicon, and you end up with empty hands, linguistically speaking.

    From a different angle, it is also impossible to reduce knowledge to a state of the brain (Chomsky assumes some form of mind/brain identity). Knowledge is not the prized possession of brains but of persons. Our brains are only tools and, as such, necessary conditions for knowledge at best. Without taking the owner and user of the brain into account, the brain only contains knowledge in the same metaphorical sense as the way libraries contain knowledge of the universe or thermostats have knowledge of temperature.

    All in all, the idea that knowledge of language is a state of the mind/brain ignores both a crucial external dimension and an equally crucial internal dimension (the source of intentionality). Lack of understanding of the source of intentionality so far precludes a naturalistic understanding of the mind.

    1. My favorite example of the external aspect of the lexicon is that I know that my house wiring has 'three phase power', because an electrician told me so, & this is useful information, in spite of the fact that I haven't bothered to figure out what this phrase really means, because I can tell it to other electricians before they come to work on the house, which is apparently sometimes helpful to them. It's a standard sort of Tyler Burge (iirc) point, & one of the reasons I don't bother to find out what three phase power is is to preserve it as a real example of the 'linguistic division of labor' that I can tell students about without making anything up.

      But it has very little to do with syntax, and I don't think it even applies to many parts of the lexicon.

  4. I would like to take up Norbert's challenge [to provide more convincing arguments than the one he discusses] under 2 caveats:

    1. You need a clear definition of the Language faculty. I do not ask for specific brain structures unless any are known, but you need to be specific about whether Merge generates sets, what the ontological status of these sets is, whether there is an upper limit on sentence length, where the lexicon is located [in the brain or external to it], etc. No reference to one of the vague publications by Chomsky, but something as nice and crisp as the five-point summary of Postal's argument above.

    2. Any personal attacks have to stop. Anyone, but especially a philosopher, knows that it is entirely irrelevant to the quality of an argument whether the arguer is a Saint or the 'devil'. If you have legitimate objections to Postal's arguments cite him and show why his argument is wrong. Saying "Postal is clearly a very angry man" without providing any evidence for such anger is as unprofessional as "His discussions are crude" without providing evidence.
    Finally, it says a lot [insert positive valence!] about you that you think "Chomsky is actually a nicer man than I already believed he was". But this is irrelevant. Even a male version of Mother Teresa can make mistakes in an argument. And those mistakes are what is at issue, not the character of either Postal or Chomsky [or anyone else who participates in this discussion].

    If you do not want to agree to the above, I will merely address some weaknesses in the argument you made above.

    1. C do whatever you want. Others appear to benefit from your interventions. I have not. If you feel like making the arguments, do so. If not, not. This is entirely up to you. If you feel that you can get better coverage someplace else, feel free to leave this blog. Start one of your own. I will leave it to others to engage with you in the future. I have reached my limit. Good luck.

    2. Thank you. I conclude from this that there is currently no precise definition of Chomsky's view that can withstand the incoherence challenge. This would be consistent with what Chomsky himself expressed in 'Science of Language'

      I will address below a couple of errors in your discussion of Postal 2003 [Remarks on the foundations of linguistics] because I think it is important to understand what was and was not the target of his criticism. As has certainly become very clear from these discussions, there is a lot of confusion about what the ontological status of sentences [or expressions] of natural language is. And I think proponents of any view would benefit from eliminating as much of this confusion as possible. But this is my personal opinion; you certainly do not have to agree with it.

      As for 'getting more coverage someplace else' - you again misunderstand my motivation. I am 'here' to learn about the biolinguistic view. If anyone can provide a coherent defence of it, I would imagine it to be you and those you have listed as contributors. Unlike you, I actually have benefitted from the exchanges and am grateful for them.

    3. Given Norbert's refusal to offer a definition for LF, I will for future discussion offer the following definition, taken from Chomsky's 'Approaching UG from Below':

      "An I-language is a computational system that generates infinitely many internal expressions, each of which can be regarded as an array of instructions to the interface systems, sensorimotor (SM) and conceptual-intentional (CI)." [Chomsky, 2007, p. 5]

      The language here is very crisp: the I-language generates infinitely many internal expressions. I claim that IF the I-language is a part of the human brain, then this definition by Chomsky is incoherent, because it requires a finite physical object [part of the brain] to generate INFINITELY MANY internal expressions [presumably brain states].

      In the same publication Chomsky writes:

      "In its most elementary form, a generative system is based on an operation that takes structures already formed and combines them into a new structure. Call it Merge. Operating without bounds, Merge yields a discrete infinity of structured expressions. Hence Merge, and the condition that it can apply without bound, fall within UG."

      Again, no hypothetical but a clear statement that Merge yields a discrete infinity of structured expressions. And if the ontological status of these structured expressions is 'concrete object', again it seems impossible that these could be 'yielded' by any finite biological organ.

      This BTW is not 'lambasting Chomsky' but taking seriously what he writes.

    4. An assumption here seems to me to be that the finite brain must contain, at any one time, the whole of any sentence structure that it produces, & the Merge story certainly seems to suggest that, but it's not a general requirement of there being some kind of 'i-language'. For example, in my bizarre religion case, people just recite an extremely long sentence, producing the words as they go, with no infinitely large structure in anyone's brain required.

      The idea of 'phases' seems to me to be at least partially intended to minimize the amount of structure the speaker/hearer has to actually maintain at any given point in time in order to produce complex structures.

      & the ambiguity about 'i-language' as internal representation of the grammar vs. the infinite set that it is supposed to determine for many languages seems to me to still be murky in this discussion (and, while we're at it, sorting out Aspects-style 'competence' might be worthwhile; I would take it to definitely be 'idealized performance', e.g. what the grammar produces without memory limitations etc., as an infinite set/collection, but I'm not sure everybody would agree with that).

    5. The first reply to your thoughts is: If Chomsky did not really mean what he clearly said, why did he not simply SAY what he means? This quote is not taken from a set of interviews but from a fairly technical book. So it would not be an 'off the cuff remark' but part of 'serious theorizing'. Note though that it IS consistent with remarks made in, for example, 'Science of Language': "You got an operation that enables you to take mental objects [or concepts of some sort], already constructed, and make bigger mental objects out of them. That's Merge. As soon as you have that, you have an infinite variety of hierarchically structured expressions [and thoughts] available to you." [Chomsky 2012, p. 14]

      And you will find quotes like that in many of Chomsky's publications. Why is he always talking of 'infinity' vs. 'very large but finite', if he always means the latter? He knows this not only has the potential to confuse but already has confused people [e.g. Katz and Postal]. Assume Norbert is right and Chomsky thinks Postal's arguments are too weak to be addressed in public. Would this utterly nice guy not go out of his way to make sure that no one else gets as confused as Postal, and make it absolutely clear that he means 'very large but finite'?

      I suggest Chomsky is not trying to confuse or mislead in the quotes i cite. I think he says there exactly what he means, what he must mean so he can account for what he calls 'creativity'. If his view is limited to 'very large but finite' it boils down to a sophisticated nominalism. Remember one of the reasons motivating the Chomskyan revolution in the 1950s was realizing nominalism cannot, even in principle, account for linguistic creativity. If it can, someone like Tomasello can get there no sweat.

      A final point: I notice some people seem to think 'very large but finite' and 'infinite' are almost the same. Like, say, 999,999,999 is almost 1,000,000,000. It's not. On a more than charitable picture it's more like 0.0000000000000009 vs. 1,000,000,000. But even this is very misleading, because you can get from the former to the latter by adding more of the same. You can't get from finite [no matter how large] to infinity this way.

      Now take another look at the passage Postal has cited:
      "In the work that I've done since The Logical Structure of Linguistic Theory - which just assumes set theory - I would think that in a biolinguistic framework you have to explain what that means. We don't have sets in our heads. So you have to know that when we develop a theory about our thinking, about our computation, internal processing and so on in terms of sets, that it's going to have to be translated into some terms that are neurologically realizable.... if we want a productive theory-constructive [effort], we're going to have to relax our stringent criteria and accept things that we know don't make any sense, and hope that some day somebody will make some sense out of them - like sets." [Chomsky, 2012, p. 91]

      You can be more charitable than Postal if you like, and interpret 'we know don't make any sense' as 'we don't know YET how this could make sense, but someday we will'. But if Chomsky truly believes 'very large but finite' is enough, then why would he 'accept things that we know don't make sense...[like sets]' at all and not eliminate them from his productive theory constructive effort?

    6. So as to why Chomsky always talks about infinity -- I think this is partly because of the way that language theoretic complexity (the Chomsky hierarchy) was conceived of in the period say 1956-1980.
      i.e. languages being finite, regular, context-free and so on.
      I think from a more modern perspective, one might want to talk about having compact representations of various types and polynomial reductions and so on. But this is not something Chomsky cares about (even if he knows about it), and so he continues to use this slightly dated terminology, which nonetheless for me at least is perfectly clear and reasonable.

    7. I do not want to tell you how YOU should talk about a view you hold. But I would be very careful attributing to Chomsky anything he has not explicitly said. There is a long list of people who have done this, and Chomsky's reaction never suggested he was amused by people attributing to him something he did not say. For this reason I stick to what he explicitly says, especially something that has been consistent over a very long period of time. It also does not seem to make sense to have something like what you gesture at in mind and yet explicitly talk about the difficulty of 'making sense of sets' - on your own view there seems to be no such need.

      BTW, I am not entirely sure what you mean by 'representations of various types'? Do you mean types [e.g. abstract objects] being represented or do you mean different types of representations?

    8. I mean for example things like regular grammars and deterministic finite automata. So from one point of view these two distinct formalisms are of the same power (because they define the regular languages). But from a modern point of view, the regular grammars are much more powerful.
      Similarly acyclic context-free grammars can only define finite languages, but are much more compact and thus powerful than a simple list of sentences.
      In 1956, the vocabulary did not really exist to make this distinction, and so instead people (as they still do now) make the distinction in terms of weak generative power. Which makes the finiteness/infiniteness point critically important.
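      To make the compactness point concrete, here is a small sketch (an illustration of my own, not from the original comment; the particular grammar and counts are hypothetical). An acyclic context-free grammar with a handful of rules defines a finite language whose explicit sentence list would be exponentially larger, which is the sense in which the grammar is the more "compact", and on the modern view more powerful, representation.

```python
from itertools import product

# Hypothetical acyclic grammar:
#   S  -> X1 X2 ... XN
#   Xi -> 'a' | 'b'        (for each i; no recursion anywhere)
# It has 1 + 2*N rules, but defines 2**N distinct sentences.
N = 10
grammar_size = 1 + 2 * N  # one S-rule plus two rules per Xi

# The same finite language written out as a plain list of sentences:
language = {''.join(w) for w in product('ab', repeat=N)}

print(grammar_size)   # 21 rules
print(len(language))  # 1024 sentences
```

      So a 21-rule grammar stands in for a 1024-entry list, and the gap widens exponentially as N grows, even though both objects determine exactly the same finite language.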

    9. Christina writes: "this definition by Chomsky is incoherent because it requires a finite physical object [part of the brain] to generate INFINITELY MANY internal expressions [presumably brain states]".

      I think (the finite physical embodiment of) the grammar generates infinitely many expressions here in roughly the same sense as we might say that the motor system generates infinitely many configurations of positions of our arms and/or legs. There are infinitely many positions your arm or leg might be in, because (let's assume) there's a continuous space of, say, angles which your shoulder or knee or elbow joints might be at. Is there any conflict here between the fact that the motor system is finitely embodied and the fact that the space of possible configurations is infinite?

      I can't see any such conflict, and similarly I don't see any between the fact that the language faculty must be finitely embodied and the fact that there are infinitely many expressions that are within its generative capacity. Although the set of expressions licensed by a particular I-language/grammar is infinite, that set is not one that is generated by merge (just as the set of all leg configurations is not a set that is generated by the motor system).

    10. @Tim: I am not sure you are serious. I hope not, but in case you are, here's a problem for your analogy. Assume, as you say, "the motor system generates infinitely many configurations of positions of our arms and/or legs". Now say you want to move your arm from A to B. Because there is a continuous space between A and B, your arm first has to move half way from A to B [1/2 AB]. But before that it has to move [1/4 AB], and [1/8 AB] before that, and... well, you get the picture: welcome to the motionless world of Zeno's paradoxes: your arm can never move anywhere because in order to get there it always has first to move over a smaller segment of the continuous space. You now have 2 choices: [A] you can say movement is just an illusion, my model is correct. [B] you can accept that the model you use is not a good model for arm movement and find a better one. For me the choice is easy: if there is good evidence that the model conflicts with what I can observe every day, I abandon the model no matter how beautiful its mathematics are.

      Now let's assume you remain unconvinced. What reason do you have to believe that the movement of your arm is in any relevant sense similar to your knowledge of language? Would a grammar of arm movement that generates the potential positions you mention be in any relevant way similar to the grammar of English? I have no idea what your intuitions are, but for me there is no similarity here that I consider helpful.

      The most obvious difference is that for any calculation of arm movement I do not need to know every point of the continuum. If I know the trajectory is a straight line I only need to know 2 points, etc. But in language I cannot 'calculate' new sentences based on those I already have. If I could, we would have no creativity [and no 'poverty of the stimulus' argument would be convincing].
      Take one of Chomsky's favourite examples:
      [1] John is eager to please
      [2] John is easy to please
      [1] and [2] are not like two points very close to each other on the arm-movement continuum but sentences with quite distinct structure. So just having a representation of one won't give you the other. And these are just very simple examples. So you actually need all the representations, or your knowledge has 'holes' in it.

      Now people are sometimes misled by Chomsky's shorthand description of Merge, the operation at the heart of grammar, as being very simple, just taking one object and adding it to another object. This may sound as simple as moving your arm from A to B. But have a look at the diagrams of people who actually work on this stuff. And most of these are for fairly simple sentences [when compared to potentially possible sentences]. So you have to take a bit more seriously what Chomsky says in his definition, especially the part I put in CAPS:

      "An I-language is a computational system that generates infinitely many internal expressions, EACH OF WHICH CAN BE REGARDED AS AN ARRAY OF INSTRUCTIONS to the interface systems, sensorimotor (SM) and conceptual-intentional (CI)." [Chomsky, 2007, p. 5]

      So these expressions are complex beasts not extensionless points on a continuous line. Now if you need all of them to have an I-language and there are infinitely many of them....

      Last minor point: sets are abstract objects whether they are constituted of a finite or infinite number of elements. So biological organs cannot generate any sets of grammars or leg configurations.

    11. You point out some differences between the language system and the motor system: one generates elements of a discrete space, one an infinite space; one generates structured arrays of instructions, the other "unstructured points"; no doubt there are many others. (In fact I think the second of these differences could be argued to be illusory, because the motor faculty of the brain presumably generates instructions for certain other parts of the body roughly as a grammar does, and the unstructured points in arm-and-leg-space correspond more to something like utterances than expressions. But it doesn't matter.)

      Yes, of course, the two things differ in many ways. But the answer to your question "What reason do you have to believe that the movement of your arm is in any relevant sense similar to your knowledge of language?", is that in each case the set of all generable things is infinite. You suggested that it was incoherent to suppose that there are infinitely many expressions/instructions generated by a finitely-embodied I-language. Is it also incoherent to suppose that there are infinitely many expressions/instructions generated by a finitely-embodied motor faculty?

    12. @Alex: I certainly understand your motivation. If there is a way to make Chomsky's statements coherent, then we should assume this is the one he proposes. But as I said, for Chomsky it seems crucially important that you do not change anything of what he actually said. Here's an example from his "Symposium on Margaret Boden":

      "The "last gasp" was followed by "Chomsky's theory of minimalism," which she dismisses as "risible" (on the basis of a quote from a hostile critic). It cannot be "risible," because it does not exist. As has been explained ad nauseam, there is no "theory of minimalism."

      I imagine that Boden thought that, in the context she was using it, the term 'theory of minimalism' was an acceptable substitute for 'Minimalist Program'. But quite obviously it was not. It was not a minor issue that would still have allowed her readers to identify what she was referring to. No, using 'theory' instead of 'program' turned something Chomsky was instrumental in establishing [MP] into something that does not exist [MT]. Chomsky specifically cites this case as one example where reference to his work is "fanciful, sometimes even bringing to mind Pauli's famous observation 'not even wrong'".

      Now given that I would have thought Boden's substitution was inconsequential, I am quite sure I would be equally wrong about any substitution I might use for "a computational system that generates infinitely many internal expressions". Chomsky is an excellent linguist, and if there had been a better way to express what he intended to say I am very confident he would have used it.

      @ Tim: You ask: "Is it also incoherent to suppose that there are infinitely many expressions/instructions generated by a finitely-embodied motor faculty?"

      Yes, on my view that is incoherent. The generation of any individual instruction would take time. Hence it would take an infinite amount of time to generate them which is impossible for a finitely embodied motor faculty.

    13. Christina, I am not interested in attacking or defending Chomsky here. Sometimes he says very sensible things, and sometimes, like all of us, he says less sensible things. Pretty much anytime he says anything, it seems there is someone sitting in his office with a tape recorder, so much more of what he says makes it into print than it does with, say, you or me, where things go through various filters up to and including peer review....
      If everything I said was written down and published, you could argue that I was incoherent, intemperate, etc. - and I am, but not more than anyone else.
      Can we focus on the issues here, and maybe pay more attention to the more polished and well-thought-out stuff, and less to e.g. his flame of Boden, which would have been better as an email than a publication?

    14. Christina writes: "Yes, on my view that is incoherent. The generation of any individual instruction would take time. Hence it would take an infinite amount of time to generate them which is impossible for a finitely embodied motor faculty."

      It seems to me, then, that the apparent incoherence stems (at least in part) from a misunderstanding of the way the word "generate" is used. I think the intended meaning is effectively unchanged if we use the term "license" instead of "generate": is it coherent to suppose that there are infinitely many expressions/instructions licensed by a finitely-embodied motor faculty or language faculty?

      If we consider the following context-free grammar (it may be worth noting that we are way back to Linguistics 101 stuff here), it is relatively standard to say that there are infinitely many strings that are licensed/generated by this finite grammar. Is this coherent?
      S -> a
      S -> a S

      And as far as I can tell, the use of a context-free grammar to illustrate the intended notion of "generate" here doesn't commit us to any particular assumptions about how similar the strings generated by such a grammar are to the expressions/instructions generated by an I-language (or by a motor faculty). So for example, dissimilarities between a context-free grammar and merge (or the instructions of a motor faculty) do not affect this point.
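      The "license" reading of "generate" can be made concrete with a tiny sketch (my own illustration, not from the thread): a fixed, finite recognizer for the grammar S -> a ; S -> a S, i.e. the language {a, aa, aaa, ...}. The recognizer occupies a constant amount of memory, yet for every n >= 1 the string of n a's is licensed by it, so a finite device stands in a well-defined relation to infinitely many strings without ever producing them all.

```python
def licensed(s: str) -> bool:
    """True iff s is derivable from S via the rules S -> a and S -> a S,
    i.e. iff s is a non-empty string consisting only of 'a's."""
    return len(s) >= 1 and set(s) == {'a'}

print(licensed('a'))           # True
print(licensed('a' * 10_000))  # True: no length bound is built into the recognizer
print(licensed('b'))           # False
```

      Nothing in the function's definition mentions, stores, or enumerates any particular member of the infinite set; the set is simply the extension of a finite predicate.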

    15. @Alex: You misunderstood the reference to Boden. It was not intended to criticize Chomsky for a 'flame' but to make the point that he is very particular about how people interpret what he writes. And I think the point he makes is legitimate: if he says X, people should not attribute Y to him. How he expressed that is irrelevant.

      Now for the passage we had been discussing: I took it from 'Approaching UG from Below' because it is [i] fairly recent, [ii] from an academic publication [not some interview], [iii] consistent with what he wrote elsewhere all the way from the 1950s till 2012. Until fairly recently I would have said there was a difference between how he defined a grammar in the 1950s [as a formal procedure specifying strings that belong to a language etc. etc.] and now [as a biological organ generating an unbounded set of expressions]. I would have also said the shift occurred some time in the early 1970s and was well documented by 1985, when he began using the term I-language. However, in Science of Language he writes "Ever since this business began in the early fifties - two or three students, Eric Lenneberg, me, Morris Halle, apparently nobody else - the topic we were interested in was, how could you work this into biology?" and in other recent publications he said similar things. Also, in one blog post Norbert explained to me that he thinks there has never been a major shift in Chomsky's approach and essentially it was biolinguistics all the way back. Now if this is so, then it probably would make sense to assume that the way he uses 'infinity' has remained consistent as well. In other words, I find it more difficult than you to motivate the conceptual shift you seem to suggest.

    16. "An I-language is a computational system that generates infinitely many internal expressions, EACH OF WHICH CAN BE REGARDED AS AN ARRAY OF INSTRUCTIONS to the interface systems, sensorimotor (SM) and conceptual-intentional (CI)." [Chomsky, 2007, p. 5]

      I have difficulty with this too. I don't think it is literally incoherent, but it is unsatisfactory for me because I think of a 'mechanism' as something that could be usefully described (at a highish level of abstraction) by something like a computer program, doing specific things at specific times, but there are too many ways of connecting acquired rules/parameters (what the LAD picks up from the environment) to idealized descriptions of possible behavior (including very low probability performances such as my 'proton incantation' from above). For example, some of the possible mechanisms would require complete structures to be retained, others might not. So, if you could make a specific proposal (at Marr's algorithmic level, iirc), and produce evidence that it was neurally implemented in people's brains, then you could describe the proposal as a 'computational system'. Hence my repetitive grumblings about distinguishing grammars from notations of what is learned from the infinite collections that they are supposed to generate.

      It is perhaps not a good idea to focus on recent Chomsky writings in a discussion of K&P's Platonism, because they seem to me to be attacking a much broader range of ideas, including ones I consider quite sensible, including my own.

    17. "about distinguishing grammars from notations of what is learned from the infinite collections that they are supposed to generate." oops, 'grammars *as* notations ...'

    18. I am not suggesting that Chomsky has made the conceptual shift I mentioned. On the contrary I think he has not. I am saying that *you* should make this conceptual shift.

    19. This comment has been removed by the author.

    20. @Tim: you ask: "is it coherent to suppose that there are infinitely many expressions/instructions licensed by a finitely-embodied motor faculty or language faculty?"

      Before we get any farther into redefining terminology [and possibly continue to talk past each other], let's be clear about what the ontological status of the expressions and instructions is. Are they biological 'things' [I do not really care about where in the brain they are located, say] or are they abstract objects that do not exist in time and space? Or are they in a third ontological category? If so, which?

    21. Christina:
      ["@ Tim: You ask: "Is it also incoherent to suppose that there are infinitely many expressions/instructions generated by a finitely-embodied motor faculty?"

      Yes, on my view that is incoherent. The generation of any individual instruction would take time. Hence it would take an infinite amount of time to generate them which is impossible for a finitely embodied motor faculty."]

      Is this really implying that some particular person needs to perform all of the structures allowed by their grammar in order to legitimately say that it 'produces' them? That's not in accord with my understanding of either the verb 'produce' in this context or the generic use of the present tense in English.

      Re the comment above, I think the overt sentence forms are *descriptions* of performances, and some attendant dispositions that people have to react to them. Maybe a bit platonic, but no more so than a herpetologist defining a lizard species in terms of the number of scales found along its lower jawline.

    22. For me the situation is pretty straightforward: If on Chomsky's view language is a biological organ, it is a physical object and as such finite. It also is the case that anything generated/licensed/stored etc. etc. by a physical organ has to be physical. So it cannot be infinite. Therefore I think it would not be wise to have a model involving e.g. infinitely many expressions for such a system [even though we might be able to conceptualise such a model].

      Things change if there is a difference between language and knowledge of language. If the latter is acquired by/stored in a physical brain, then talk about the former consisting of infinitely many expressions is no problem. What is stored in our brains and used to, say, generate expressions can even be in some 1:1 relationship to some part of the language. On this view it also would not be the case that some particular person needs to perform all of the structures allowed by their grammar in order to legitimately say that it 'produces' them. Maybe this is the kind of view Chomsky SHOULD have, but I doubt it is the view he has.

      Now maybe I can ask you guys something. To me it is puzzling why people would even look at analogies like the one Tim provided when the aim is to learn about language. I do not mean this personally in any way; I am equally puzzled when I read Chomsky writing about comet trajectories or nematode neurons or research on E. coli. To me none of this seems to have the slightest promise to shed any light on the kinds of questions David told me are at the heart of the disputes between say biolinguistics and 'empiricists' like Tomasello. Why do we care whether there is some similarity between the strings generated by an I-language and those generated by a motor faculty? If we care because we want to address the tension between a finite biological organ and [potentially] infinite generative power, then why are we willing to look at virtually everything [no matter how far removed from actual language it seems to be] with one exception: Platonism? I am not saying everyone should BECOME a Platonist [I am not one myself]. But what is the harm in looking at it?

    23. Christina writes: "lets be clear what the ontological status is of the expressions and instructions. Are they biological 'things' [I do not really care about where in the brain they are located say] or are they abstract objects that do not exist in time and space. Or are they in a third ontological category?"

      I will very happily admit that I do not understand these questions, presumably at least in part because of my ignorance of philosophy. Perhaps this means that there is something I'm unaware of that renders my understanding of linguistics incoherent; I don't know. The only point I was trying to address was the issue of whether the finiteness of brains makes it incoherent to say that an I-language generates infinitely many expressions/instructions. There may well be other factors --- perhaps other properties of brains, or other factors that I would only appreciate if I understood what an ontological category was --- that make this claim incoherent. But as far as I can tell the finiteness of brains does not make it incoherent.

    24. "If on Chomsky's view language is a biological organ, it is a physical object and as such finite.": language *as* a biological organ strikes me as incoherent; language as a *function* of a biological organ makes sense, although to me it looks like a function of an organ system that does many other things; to what extent and in what way the properties of this system are shaped by the needs of language is an open question.

      By analogies, do you mean the skeleton one? I don't find that so useful, because the angles can be approximated by rational numbers, and in fact have to be to carry out any simulations whatsoever, whereas in grammar the real-world finitistic limitations obtrude themselves in a very different way. I.e. your parser handles 'John loves Mary' with aplomb, but crashes on some 82-word specimen from Gibbon, even though all the grammar rules are in there.

      The reason I personally am not willing to adopt Platonism is that it has never made any sense to me.
      I'm happy to view the infinite set of structures generated by some model of grammar as a Platonic object, but I don't study it, I use it. The Postal and Langendoen Vastness book appears to me to be a major corroboration for the claim that there is something deeply wrong with it.

      But perhaps it will be useful, or perhaps not, to point out that the 'driver' for the infinitary conception is the idea that linguists are supposed to 'capture the generalizations' about the data, so that, for example, looking at 'Mary's dog barked', 'Mary's mother yelled', and 'Mary's mother's dog yelled', we are attracted to a recursive NP rule that predicts 'Mary's mother's mother yelled', and this also seems to be possible, and then there's no natural place to stop. If you don't believe in 'capturing the generalizations', then the reason for the recursive NP rule for English goes away. But the legitimacy of trying to capture generalizations with rule formulations or things like them is another bucket of prawns that has been sitting in the sun for some time.

    25. This comment has been removed by the author.

    26. Avery Andrews noted that K&P "seem to [him] to [be] attacking a much broader range of ideas [than Chomsky's], including ones [he] consider[s] quite sensible, including [his] own." It doesn't only seem to him; it's just so.

      In fact there is only partial overlap between Platonist and cognitivist theories. For example, Platonists would have to tear themselves apart to produce a theory of language acquisition, which is what I-language is about.

    27. Okay, a bit of ontology/epistemology 101, because for this debate it is really important. Ontology is about questions of existence. So 'Does the sentence 'Chomsky is a famous linguist' exist?' is an ontology question. In this case we can answer 'Yes'. Another ontology question is 'What kind of 'thing' is this sentence?' Is it a physical thing [like the letter combination you see on your screen right now, or the neuron activation in your brain when you read it, or the sound waves when you hear it, etc.]? If this is the ontological nature of sentences, then they are what we call tokens, and tokens are physical objects. Another possibility is that the token you see on your screen is not the sentence itself but a representation of it. On this view sentences are types, or abstract objects that do not exist in time and space.

      Now let's come to epistemology: this is about what we KNOW about the world and how we can know stuff. Knowledge is the foundation for our beliefs. Ideally our beliefs are an accurate representation of the world. Sometimes they are, but often they are not. [There is a lot of philosophical debate about what "knowledge" is, whether we actually know anything, etc., but this does not matter here.]

      Now here's the thing. Leaving special cases aside, what I know about an object does not change that object. And if I have false beliefs, I often do not know this. Sometimes it is easy to correct a false belief: if I think I can walk through a wall, my belief does not change the wall and I am stopped in my tracks. But in many cases things are not so simple. Language is a paradigm example where what we think we know could be vastly different from what language actually is.

      Further, the models we use to 'describe' the world can [and often do] involve simplifications, generalizations, etc. But these do not affect in any way the objects they are models of. I can model a car as a mass-point, but this does not turn any actual car into a mass-point. So, continuing the example above, I can use a mathematical model to generate instructions, and when I talk about this MODEL it is not incoherent to suppose that there are infinitely many expressions/instructions generated by this formal model. But if I talk about an implementation of the model [in the brain] I am no longer talking about the model, and things change.
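      The model/implementation contrast can be made concrete in a few lines of code. This is a minimal sketch with a made-up phrase pattern of my own choosing, not anyone's actual proposal: the formal model defines infinitely many expressions, but any physical run of it only ever produces a finite prefix.

```python
from itertools import count, islice

def expressions():
    """A formal model: a procedure that defines infinitely many
    expressions ('the nice car', 'the very nice car', ...)."""
    for n in count(0):
        yield "the " + "very " * n + "nice car"

# The model itself has no last expression. But any implementation of
# it on physical hardware can only ever produce a finite prefix:
first_three = list(islice(expressions(), 3))
```

      Talking about the generator function is talking about the model; talking about a particular bounded run of it is talking about an implementation. Nothing incoherent follows from describing the first in terms of infinity.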

      Take a simple example. You need a garage for your car. When you tell the contractor the measurements of your car, you make a mistake, and he builds based on the measurements he got from you. He can build a perfectly fine garage, but since your car did not change because of the measurement error, it may not fit through the door. The same holds, on a much more complicated scale, for models of language: a model can generate infinitely many expressions/instructions. A language consisting of abstract objects [like those suggested by Platonists] can be described by such a model. But such a model cannot change a physical brain.

      Now if Chomsky is right and language IS a biological organ, then it is a physical object no matter how many bells and whistles it has. And then it becomes incoherent to talk about "a recursive NP rule for which there is no natural place to stop". If language is the same as knowledge of language and located in the human brain, there MUST be a natural place to stop - determined either by what can be stored by the brain [or the language organ, which presumably is just a part of the brain] or by what can be generated by it [where 'to generate' must mean something a physical organ is capable of doing, NOT an abstract procedure].

    28. According to me, language is *not* a physical object, but an aspect of the behavior of a kind of one, which requires a different kind of analysis from what I see above, and where there might be better or worse prospects for learning anything about the structure of the (kind of) physical object that produces it. If there is a natural stopping point, what is it, and how could anyone find out what it is?

      What actually seems to happen is that intelligibility and/or probability of production tails off gradually, with some odd 'bumps' in the curve for various purposes, such as the humorous effect produced by this production of (reputedly bad) German: "der halbe FreundesKreis meines Freundes Schwesters Exfreunds Bruder". And different modes of combination tail off differently, with addition of another sentence to the story being the least constrained, center-embedding the most.

      To make progress on this, I think we probably need to forget about infinity for a while and reflect on why most linguists think there is at least phrase structure, or something roughly equivalent, rather than a list of possible sequences of parts of speech. Or just a list of sentences.

  5. I'm no philosopher so most of this discussion is "above my pay grade" (as Norbert likes to say), but it does seem to me that all the talk of sets might be creating unnecessary confusion, in a couple of ways.

    First, even though it's common to find claims along the lines of "Merge applied to x and y produces the set {x,y}", particularly in some of the more big-picture, programmatic discussions of generative linguistics, the day-to-day practice of generative linguists does not seem to me to have anything much to do with this formulation of merge. Certainly it doesn't *depend* on merge being formulated this way. A description more in keeping with most day-to-day practice might be to say something like: merge applied to x and y produces a tree which has x and y as its immediate subconstituents, and has a label indicating whether x or y projects. See Stabler's formalisation for one concrete version of this. (Or, if you really want to insist that labelling is something extra on top of merge, maybe it's just the tree with x and y as immediate subconstituents, although again I think the large majority of boots-on-the-ground work uses the term "merge" for something that creates an asymmetric relationship between its operands.) Anyway, the point is, merge need not be construed as creating a set, and indeed in practice is often *not* construed that way.

    Second, even if you do want to say that the structures created by merge are sets, there are no infinite sets anywhere. At the risk of pointing out the obvious, there is no upper bound on the size of the structures created by merge, but every such structure is finite. There are infinitely many structures that can be created by (repeated applications of) merge. If you want, you can think about putting all those structures into a set, and that set will be infinite. (You might even want to call that set "an E-language".) But there's no particular need to do that, and nothing much seems to come of doing it.
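    The point that every merge-built structure is finite, even though there is no bound on how many there are or how large they can get, can be sketched in a few lines. The labeled-dictionary encoding below is one hypothetical rendering of the asymmetric, Stabler-style merge described above, not a quotation of anyone's formalism:

```python
def merge(x, y, head="left"):
    """Combine x and y into a tree whose label records which operand
    projects -- an asymmetric relation between the operands, not a
    bare set {x, y}."""
    return {"label": head, "left": x, "right": y}

def size(t):
    """Count the leaves (lexical items) of a tree."""
    if isinstance(t, str):  # a lexical item is a leaf
        return 1
    return size(t["left"]) + size(t["right"])

# Repeated applications of merge build ever-larger trees, each of
# which is nonetheless a finite object:
t = "dog"
for word in ["the", "saw", "Mary"]:
    t = merge(word, t)
```

    Every value `merge` ever returns has a finite `size`; the infinity, if one wants it, lives only in the collection of all possible outputs.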

    Given both of these two points, I can't see any reason to invoke the notion of a set at all, anywhere. So, whatever it might mean to ask whether "sets are in our heads" (I'll leave this to the philosophers), if this question is taking on some extra slipperiness due to the mention of sets, then perhaps that slipperiness can be avoided by replacing this term with "finite trees".

    And as regards the first point: one might object that the claim that merge produces {x,y} should be taken as the real or relevant definition of merge, because it's in print with a prominent person's name on it, so we should take it seriously. But if the issue is the status of the generative enterprise on the whole, then it seems much more relevant to look at the kinds of assumptions that generative linguists make in everyday practice. Particularly if one has already been asked a couple of times to focus more on the boots-on-the-ground nuts and bolts of everyday linguistics, and less on big-picture overviews that don't get into the details.

    1. I completely agree that there has been a lot of confusion. For exactly this reason I have asked Norbert for a clear definition so we all know what we are talking about HERE. A view that requires no sets may not be subject to the incoherence challenge [but it would no longer be the view Chomsky has defended for roughly 60 years now]. And a language that has upper limits on sentence length or size of lexicon would certainly be vulnerable to some of the criticisms by Jan Koster. But again, that is speculation depending on what the exact definition of FL is. Once Norbert has provided one, we can look at it and see if it avoids internal incoherence.

  6. Before addressing some of the errors in Norbert's presentation of Postal [2003], available here: I would like to clear up a confusion about Platonism that he introduced independently of this paper.

    "Fourth, if one is a serious Platonist about NL then the following becomes a serious option: normal native speakers of an NL may not know their own NL. As Postal has correctly noted, a native speaker is exposed to at most a miniscule subset of the linguistic objects of an NL. And given that we don’t identify an NL with a mental state of a native speaker, there is no reason to think that native speakers in general do know their NLs. I find this view to be, ahem, odd. If native speakers don’t know their own NL then nobody knows anything. But for a Platonist the fact that many/most/all native speakers know their NL in virtue of being competent speakers of their NLs should be a bit of a surprise."

    Since, on the Platonist view, NLs are collections of abstract objects, hence themselves abstract objects, the only sense of 'X's own NL' is the one that X has (native) knowledge of. So then one cannot fail to know one's own language. To say otherwise reveals a misunderstanding of the Platonist commitment.

    Now I imagine [but could be wrong] that with a clever philosophical argument Norbert arrived at his conclusion via this possibility: Given that a language like English is 'infinite' on the Platonist view I could know a collection of tokens of English sentences [A] and Norbert could know a collection of tokens [B]. There could be zero overlap between [A] and [B] and so I would not know a single token of Norbert's collection and vice versa. Now, [1] since I do not know any token of [B] but [B] is a collection of tokens of English I do not know English. But [2] I do know [A] also a collection of English tokens - so I do know English. From [1] and [2] it seems to follow that I do not know my own language. Amusing but we probably should put this case aside.

    Of course even in 'real life' no two speakers' collections will be identical. So there are actually tokens of English Norbert knows and I don't [and maybe even the other way around]. But from this fact we can hardly conclude that neither of us knows his own language. For a Platonist it is perfectly acceptable that the vast overlap between Norbert's collection and mine is due to the fact that we communicate with one another, and such communication can only work if we have largely overlapping token sets.

    Anyone tempted to insist on the case I set aside might note that on the Chomskyan view of internalism similar problems could arise. If the main function of language is to express my thought, then any subset of English my brain might activate will do [I KNOW what I am thinking]. The same goes for Norbert's brain, and at least hypothetically it is also possible that there is little or even no overlap between my I-language and Norbert's. Add the Chomskyan claim that communication is NOT an important function of language, and there is no reason for our I-languages to be similar. I suggest this hypothetical case is prevented BECAUSE communication is an important function of language - so a fairly large overlap of I-languages is required. But the same holds for the Platonist case.

  7. First I want to express my full agreement with Norbert re: “Incoherence is a nasty vice. Were the biolinguistic-mentalist interpretation of linguistics truly incoherent this would be a serious problem.” I also agree that Postal’s argument as reconstructed by Norbert would be silly. But, this is a problem of the reconstruction, not of the original argument and I will now show where Norbert went astray. The numbered summary is essentially correct: Postal argues that it is a mistake to conflate Natural Language and Knowledge of language. From this Norbert concludes:

    “As Postal emphasizes, the key critical observation is that the ‘knowledge-of’ or ‘know’ is a two-place predicate/relation. From this it follows that NL is distinct from knowledge of NL and so any attempt to analyze NLs as mental states must lead to incoherence.”

    This is not what Postal claims. The conflation of knowledge of language and language is certainly a mistake on Postal's view. But this conflation alone does not make Chomsky’s view incoherent. If there could be a 1:1 relationship between the sentences of NL and mental states representing such sentences, no incoherence would arise, even if one mistook one for the other. The examples Norbert provides are of the kind for which [at least under one widely accepted philosophical analysis] such a 1:1 relationship exists. “To say that Sam has a headache is not to postulate a relation between Sam and some headache. It is to ascribe to him a certain unpleasant mental state”. True, and once Sam has acquired knowledge of language he can label this mental state by saying “I have a headache”. So Sam knows when he is in the relevant mental state [leaving aside here clever philosophical arguments about Martians, phenomenal zombies, etc.]. Because we have this 1:1 relationship, talk of ‘Sam knows he has a headache’ is in ordinary discourse no different from talk of ‘Sam has a headache’.

    The problem for language and knowledge of language is that no such 1:1 relationship exists between possible mental states [finite] and sentences [expressions] of an NL [infinite]. Demonstrating this is the point of Postal’s arguments under the header ‘NL as organ state and yet infinite’, which Norbert dismisses as ‘lambasting Chomsky’. The point is fairly simple: language as biological organ [or state thereof] has to be finite. Knowledge of language has to allow for infinity. Postal cites Chomsky who claims we have “the task of accounting explicitly for the unbounded range of sentence structures in a particular language”[cited on p. 142 of Postal 2003]. As long as the biological organ and the ‘[knowledge of] the unbounded range of sentence structures' are metaphysically distinct no conflation occurs and no incoherence arises. But for Chomsky the two are NOT so distinct. And, while in the headache case we can have a 1:1 relationship between brain-states and knowledge [about being in such a brain-state] in the case of natural language this is impossible. Hence the conflation leads to incoherence.

    In addition, Postal makes an interesting argument demonstrating that, pace Chomsky’s repeated claim that a human child can learn every natural language, one can conceive of natural languages that cannot be learned. This argument [p.140-142] does not rely on claims about infinity because the language Postal envisions is a 1:1 mapping of a learnable natural language. So a critic who wishes to defend Chomsky against the previous challenges based on the actual/potential infinity distinction Norbert seems to rely on cannot use this line of defense here.

    Final point: as philosophers we know that dismissing one argument that alleges incoherence does not establish that there is no incoherence. Once the possibility has been raised that biolinguistics is internally incoherent, the burden is on the proponents of biolinguistics to demonstrate that their view is not internally incoherent. Chomsky’s recent remarks suggest that at the present time biolinguists are unable to discharge this burden.

  8. Avery,
    The following Chomsky quotation you gave reveals pretty well what I find questionable --the most civilized adjective I can think of-- about the whole approach:

    "An I-language is a computational system that generates infinitely many internal expressions, EACH OF WHICH CAN BE REGARDED AS AN ARRAY OF INSTRUCTIONS to the interface systems, sensorimotor (SM) and conceptual-intentional (CI)." [Chomsky, 2007, p. 5]

    It is an astonishing denial of the external aspects of language. Thus, for me at least, the words of English are complete interface elements, with a pronunciation, connections to the conceptual system and combinatorial properties (called "valency" by the old structuralists). All these properties are objective in the sense that they largely exist independently of anybody's "internal expressions" or individual "instructions to the interface systems". These properties existed before we were born and will survive after our death. Obviously, a multitude of these properties could even be retrieved after some event in which ALL speakers of English would instantaneously die out. We may assume that not all properties of English could be retrieved after this great extinction, but that is true for all cultural objects, which have an existence depending on BOTH external properties and inner mental faculties. Thus, a painting of Rembrandt clearly is an object existing outside of our minds but cannot be interpreted, or even recognized as a masterpiece, by cats and dogs.

    The external dimensions of language are perhaps its most essential ones because there is not the slightest reason to believe that our inner capacities, no matter how impressive and innate, have linguistic functionality without the external dimension. It is its blind spot for the --essential-- external dimension that makes Chomskyan (bio-)linguistics less credible for me in the long run.

    Where does Platonism enter this picture? Even after Christina's zillionth post, I still don't see why the virtual infinity of algorithms and their implementation in finite physical devices is a problem. I do sympathize with Katz, however, in his remarks about the type-token distinction. Platonism comes in various, sometimes questionable, flavors, but one of Plato's key insights seems to hold up pretty well, namely that universals (ideas, concepts, types, whatever has normative aspects) cannot fruitfully be described naturalistically (as would probably be one of the long-term goals of biolinguistics). An idea is more abstract than any of its physical realizations. That's perhaps hard to accept for those with a physicalist world picture, but in my view they'd better get used to it.

    1. I think the 'infinity' issue comes from skipped steps here, namely 'the finite set of functions/operations' and 'the finite lexicon'.

      - the organ that embodies FL is biologically finite
      - the set of operations or functions that FL performs is finite (the grammatical rules)
      - the lexicon (although large) is finite

      The latter two (finite) ingredients in principle give you the capacity to produce an output of infinitely many expressions, realized as sentences. It's the lexicon and the grammatical rules that match an organ in terms of finiteness. Perfectly coherent in my view.
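      The point that a finite lexicon plus a finite, recursive rule set yields an unbounded output can be sketched as a toy generator. The grammar and lexicon below are hypothetical examples of my own, with a depth cutoff standing in for whatever limit a physical device imposes:

```python
import random

# A hypothetical toy grammar: a finite lexicon and a finite set of
# rules, one of which (NP -> NP PP) is recursive.
LEXICON = {"Det": ["the"], "N": ["cat", "hat"], "P": ["in", "on"]}
RULES = {
    "NP": [["Det", "N"], ["NP", "PP"]],  # second option is recursive
    "PP": [["P", "NP"]],
}

def expand(symbol, depth=0, max_depth=3):
    """Rewrite a symbol using the finite rule set. Recursion is what
    lets finitely many rules define unboundedly many expressions;
    max_depth stands in for a physical device's resource limits."""
    if symbol in LEXICON:
        return [random.choice(LEXICON[symbol])]
    options = RULES[symbol]
    if depth >= max_depth:
        options = [options[0]]  # force the non-recursive option
    out = []
    for sym in random.choice(options):
        out.extend(expand(sym, depth + 1, max_depth))
    return out

sentence = " ".join(expand("NP"))  # e.g. "the cat in the hat on ..."
```

      Raising `max_depth` never changes the lexicon or the rules, only how much of the unbounded output a given run realizes, which is exactly the sense in which finite ingredients match an infinite capacity.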

    2. @Jan: I certainly agree about the insanity of ignoring the external dimension. I think this step might have happened at some time between Aspects and Knowledge of Language, since the Aspects notion of 'competence' seems to be plausibly understood as a kind of idealized performance, whereas the discussion of i-language in KLT I find unclear.

      Somehow or another, the external performances cause different people to acquire internal systems that wind up functioning in roughly compatible ways, and I don't understand how this interesting fact can be investigated without making some tentative proposals about internal mechanisms and structure. Chomsky seems to act as if he thought that the 'Veil of Ignorance' was nonexistent, while K&P (and quite a number of other people) seem to think it is absolute. Thicker in some places than others would be my guess.

    3. I think it's very likely true that Chomsky doesn't currently view a person's linguistic competence as an abstraction over their performance systems (if this is understood to mean that the latter are somehow “more real” or “more fundamental” than the former). Ad Neeleman and Hans van de Koot recently defended a conception of the competence/performance distinction along these lines and they seem to take this to be a non-standard characterization of it.

      I personally doubt that Chomsky took this view of competence even in Aspects, since for someone who has certain philosophical tendencies which I darkly suspect Chomsky of having, this would get the causal and explanatory priorities backwards. It's because I have linguistic competence that I'm able to produce and understand sentences, not vice versa. This could be denied by analogy with philosophical theories of knowledge according to which knowing that X reduces to having a certain set of abilities. My hunch is that Chomsky doesn't have much sympathy with that kind of theory.

  9. I wonder whether part of the confusion might not be due to the fact that those Platonists accusing biolinguistics of 'incoherence' do not see, or do not accept, that biolinguistics too is obviously based on 'Platonism' in the sense of idealization, except that it is not the language that is taken to be the Platonic object but the language faculty, as embedded in an idealized human being. The fact that real language faculties will probably always have an upper limit on sentence length is because they are embedded in human beings that die. But death is irrelevant for the study of human language. (The previous one was a sentence I just liked to write.)

    Maybe this also answers Jan's objection. It seems to me indeed that Chomskyan biolinguistics studies a system that is running on its own, without the external dimension. I can understand why somebody would think that such an idealization makes the program less interesting (I might be tempted to think that), but not why it is "less credible".

    1. I did not want to say this in this forum, but since you raised the issue I have to point out that Platonists are by no means as naive as some of the things I have said might suggest [and seemingly did suggest to you]. They are of course aware that there IS a very strong Platonist element to Chomsky's linguistics, a point explicitly made here:

      "in advancing set-theoretical accounts of NL structure, Chomsky again just abandons his own putative ontology and proceeds, but only incoherently, as if he had a realist one which permitted him to sensibly view sentences in set-theoretical terms. That is, his ontology is evidently so awful that even he pays no attention to it when actually considering real linguistic matters". [Postal, 2009, p. 257] also at

      It is not MY aim in this forum to accuse Chomsky of salvaging his biolinguistics by adopting a dishonest Platonism. All I am trying to show is that if one assumes Chomsky to be honest and honours what he has put in writing as a genuine expression of his view then THIS view is internally incoherent.

      And to address another point: I also agree that the internal incoherence is not the ONLY [or maybe even the most serious] problem that biolinguistics faces. It is one problem I have addressed [since it is one of interest to philosophers], but I fully agree that for most working linguists OTHER problems may be much more worthy of immediate concern [e.g. some of the problems Jan mentions or some of the problems I discuss here: ]

    2. But who are these Platonists? You are not a Platonist, Norbert is not, I am not, Katz is dead. Is Postal the only one? Why are we even discussing this?

      There is an implicit claim here that the philosophical arguments about the metaphysics of abstract objects are relevant to a scientific problem. That seems indefensible.

      Are you happy to say that a computer or a human can represent an abstract object like a number, or a game of chess? Isn't that enough?

    3. I do not know why you have taken part in the discussion. For me the answer is simple. If it can be shown on metaphysical grounds that a position [like biolinguistics] is internally incoherent, then this is reason to abandon this position [in my book a reason as strong as, say, discovering that a model you want to use is based on the assumption that you can square the circle]. And I think if we take Chomsky at his word and consider what he has put in writing as an expression of his view, then his biolinguistics is incoherent.

      Am I happy to say a computer or a human can REPRESENT abstract objects? Sure, I AM [as is Postal, BTW]. So if there is a difference between language and knowledge of language, I do not have the incoherence objection. But recall the title of this discussion - Going Postal - and Norbert's claim:

      "there is nothing incoherent about analyzing knowledge of language as having/being in a certain mental state. On this well known view, knowing English is one state, knowing French another etc. UG will be the study of these possible states and how they arise in the minds/brains of their possessors. We can ask how these states enter into processing and production, how brains instantiate these states etc."

      These brain-states are finite, and if language = knowledge of language, then these brain-states ARE language; they are all there is. In this case language has to be finite [which contradicts what Chomsky insists upon].

      You can of course adopt a different view of what language is [and I think that is a very smart move], one that allows that there IS a difference between language and knowledge of language. Then one has to look at what language is [ontologically] on your view. Maybe there is a coherent non-Platonist view of language [Tomasello certainly thinks so]. Since a view that distinguishes between language and knowledge of language is not necessarily internally incoherent, there's no reason to reject it on ontological grounds.

    4. A metaphysical theory can be metaphysically incoherent but I do not think a physical theory can be metaphysically incoherent -- it makes no metaphysical commitments at all.

      The only remaining problem is the one you raise about the finiteness/infiniteness claim. Yes, I agree that the universe is finite, but there is no problem at all with infinite models that are approximations to the finite reality. I think we agree on that.

      I agree that Chomsky is not always clear on the distinction between I-grammars and mathematical models of I-grammars. So the only objection here is that he says language is infinite and he should say "language is best modeled by an infinite system even though it is finite". But I have suggested a reason earlier on why he sticks to this inappropriate manner of speaking.

      I also agree, with everyone I think, that Chomsky's use of the term knowledge is misleading. But that is a different issue.

    5. Alex, that nobody you know is a Platonist is not an argument against it. In this case, at least, there is no clear distinction between metaphysical questions and scientific problems. The trouble with abstract objects is that there is not a single known naturalistic theory accounting for them. Explanation in the latter sense should not be confused with representation of abstract objects, which is, trivially, an uncontested option. You don't believe that your laptop understands numbers or chess, do you? Representational devices get their intentionality from the human mind ("derived intentionality"). The intractable, but crucial, question is where the human mind's representations get their intentionality from.

    6. This comment has been removed by the author.

    7. @Alex: one p.s. to my previous post: You say: "But who are these Platonists? You are not a Platonist, Norbert is not, I am not, Katz is dead. Is Postal the only one? Why are we even discussing this?"

      If there is a possibility that Platonism is right then it does not really matter how many people hold the view, does it? *We* [=the majority] have been wrong in the past. I have a book that recounts a similar conversation Copernicus had in 1494. 'No one besides you holds this crazy heliocentric view - why should we care?' His answer was 'Because there's something wrong with the geocentric view and I can 'fix it'.'

      I came to believe, long before I ever talked to Postal and heard of linguistic Platonism [not exactly a subject taught in phil-o-language], that there are problems with biolinguistics. But like David I never got over the suspicion [in David's case probably conviction] that there is something important missing in the 'empiricist' alternatives. If someone like Postal can account for these 'missing pieces', then I think I ought to pay attention. I have bombarded Postal with objections to Platonism for more than 3 years - so far I have always come home with a 'bloody nose'. Maybe I am just not too bright [but then it seems even Chomsky has no refutation up his sleeve], or maybe he's the Copernicus of linguistics, the one who is right even though the vast majority of linguists [and philosophers] think he's wrong?

    8. @Jan, Of course it is not an argument against Platonism. It is an argument against spending much time constructing arguments against Platonism.

      I think there is interesting work being done in naturalistic accounts of intentionality, some of it in my own department (e.g. Nick Shea). But this is clearly orthogonal to the Chomskyan program.

    9. @Alex, I was not really thinking about "naturalizing representational content" (Shea), but more about (the absence of) successful naturalistic theories of abstract objects, like types. Is there any must-read literature I am overlooking? If Plato was right, naturalizing abstract objects is not possible, which would be bad news for much of the potential domain of biolinguistics.

    10. @Alex: I am not sure what to make of your comment

      "A metaphysical theory can be metaphysically incoherent but I do not think a physical theory can be metaphysically incoherent -- it makes no metaphysical commitments at all."

      Maybe I misunderstand what you say, but if I don't, it is clearly wrong. Physical theories DO make metaphysical commitments and have been refuted based on those. For example, the many-worlds interpretation of quantum mechanics, which is certainly a physical theory, has been rejected because of its excessive ontological burden: the axiom system of QM is a physical theory, but many-worlds/Copenhagen/etc. are all 'metaphysical'.

      And the 'biolinguistic' story about the interpretation of grammars is in the same position: it corresponds to a metaphysics/ontology assigned by a particular 'metaphysical' (ontological) stance to a theory of the form of natural language. What the theory itself consists of is a formal statement of the constraints that yield the patterns in the observed phenomena.

    11. "Physical theories DO make metaphysical commitments and have been refuted based on those. For example the many-worlds interpretation of quantum mechanics, which is certainly a physical theory, has been rejected because of its excessive ontological burden."

      I agree with you that physical theories (like all theories) have ontological commitments, and that there are cases where they have been rejected by people who find these commitments unattractive. But rejecting a theory because one doesn't like its ontological commitments is a far cry from refuting the theory, isn't it? I'm no quantum physicist and I may simply not know the relevant work, but I've never heard of the many-worlds interpretation being _refuted_...

    12. Christine, I am just not sure I follow your argumentation. You seem to be saying that 'biolinguistics' is incoherent, because one can find in Chomsky's work things that contradict each other.

      But the latter fact is, to me, completely uninteresting. It seems only interesting to Chomskyologists; biolinguistics is not necessarily to be equated with 'everything which Chomsky says'. One can take a Platonic view of it: that it is an idea, independent of any individual person. I would prefer such a view. And I still don't see why biolinguistics is then *necessarily* incoherent.

      And even to Chomskyologists, the fact that Chomsky seems to say contradictory things does not seem to be so interesting. As has been pointed out already, the fact of the matter is that almost every word of Chomsky has been recorded and published by somebody. Furthermore, one could say that even in his own writing it is clear that language is mostly 'audible thought', i.e. Chomsky is somehow thinking aloud. (I have to admit that it is an aspect of a lot of syntactic literature which I have always found a bit difficult myself: this tendency to change one's definitions while writing. And Chomsky is clearly the origin of that tendency. But that can never be a criticism of the theories which are expressed in this way.)

    13. As I said several times: if you want to call something that is very different from what Chomsky proposes 'biolinguistics', I am not claiming THAT is incoherent unless you specifically tell me all your commitments - they may or may not be incoherent.

      I have to admit that, as someone coming from the sciences, I find the cavalier approach to contradictions in what Chomsky publishes troubling. If he has in fact been misrepresented, he of all people is quite capable of correcting such misinterpretations [as he has done, for example, with Boden] and saying clearly what he means. Also, I doubt that in Science of Language, for example, he has been misrepresented. James McGilvray is a very close friend of Chomsky's and very careful to record exactly what Chomsky says. He has no motive whatsoever to misrepresent Chomsky, and I seriously doubt he does. Yet the book is teeming with contradictory statements.

      I can only base my criticism of a theory based on how it has been expressed and based on the results it has produced. That is how the game of science is played [to borrow Norbert's terminology]. If it is up to me to read into Chomsky's theory what I want and up to you to read into it what you want then we are no longer doing science...

    14. One of my problems with this discussion is that it seems to alternate between issues that have been of serious concern since the 1960s (such as the putatively infinite number of sentences, and the possibility of inferring anything at all about the structure of the brain from examining some of the behaviors it seems to produce) and the problem of trying to figure out whether anything Chomsky has said makes any sense in this century, or since the rise of Minimalism, or GB, or whenever you started having difficulties with it. The further back you go, the greater the range of linguists' thoughts and practices it will have relevance to.

    15. I think part of this particular problem is that people who hold vastly different commitments call themselves 'Chomskyan'. Read for example Fiona Cowie's 'What's Within', which made this point explicitly more than a decade ago. I do not agree with everything in her book, but she is right that Chomsky holds several core commitments that can also be defended independently of each other. She identifies 5 for Chomsky, and notes that some people hold 2 of those but reject the other 3, yet still call themselves Chomskyans. But this creates problems because if I criticize Chomsky based on a commitment he holds, some Chomskyans feel under attack even though they do not share this particular commitment. Add that Chomsky has also been criticized for things he never said and you have ample opportunity for confusion [not to mention quite needless hostility].

      In an ideal world none of us would ever make a mistake. In a much better world than ours we all would correct mistakes we have made the moment we become aware of them. But I think even in our world it is not asking too much of someone like Chomsky to respond to a criticism like Postal's that has been around for decades. All distractions aside, Postal has used passages from Chomsky's own writing to show that Chomsky holds 2 commitments [call them A and B] that are incoherent. There are really only 2 ways to respond to this criticism. Either Chomsky could say: yes, A and B are incoherent [thanks for pointing this out, Paul], I herewith drop B and only hold on to A. Then everyone knows that we should no longer attribute B to Chomsky and can move on with our lives. Or he could respond by saying: even though A and B may appear to be incoherent they are in fact not, and here is why. In the entire debate we had here only one person has actually taken up this line of defence, by saying 'maybe we have sets in our heads after all'. Possibly that was meant as a joke, but that is exactly what needs to be shown if we want to show that Chomsky's A and B are coherent. Virtually everyone else in the discussion [including Norbert in the post we all comment on] has said something that boils down to: A and C are not incoherent, or B and D are not incoherent. That may or may not be the case, but it does not address Postal's challenge. Now obviously someone who does not share Chomsky's commitment to A AND B does not need to worry about the incoherence. But it does not follow that Chomsky should not worry about it...

    16. Part of the reason for this annoying behavior of linguists is that fundamentally, most of them don't really care whether all of Chomsky's claims make sense (I'll admit that if none of them did, it would be a bit worrisome), even less about whether all or even any of them do or don't make sense or add up to a consistent whole under some other discipline's conventional interpretation of what the words he uses are supposed to mean; what they care about is whether what they themselves are doing and teaching to their students makes sense and is viable as a basis for future research. E.g. both Jan and I have come to the conclusion that Chomsky's apparent lack of interest in the external aspect of language does not make sense.

      So, I've been trying to get attention focused on the Veil of Ignorance argument because it has been causing trouble from the very beginning; Quine for example had a whinge based on it in the 1970 _Semantics of Natural Language_ volume, and many (possibly all) of the other features of generative grammar that people complain about flow from it. So for example we have unbounded sentence length because there's no natural stopping point for iteration and edge recursion (for central recursion you could probably get away with decreeing 5 as the limit), and then because the set is infinite you're supposed to need UG (at least until algorithmic complexity theory came along).

      I did read Cowie's book not too long after it came out, and even had a bit of discussion of the issues with her at a seminar (I don't think I won); I suspect I would be more sympathetic to some of her points now than I was then, but can't review it any time soon.

    17. You say: "...fundamentally, most [linguists] don't really care whether all of Chomsky's claims make sense (I'll admit that if none of them did, it would be a bit worrisome), even less about whether all or even any of them do or don't make sense or add up to a consistent whole under some other discipline's conventional interpretation of what the words he uses are supposed to mean"

      Well, first of all thank you for clearing this up for me. Based on what I have been reading in Chomsky's own work, from way back in the 1950s all the way to 2012 [the title of his last book was 'The Science of Language'], linguistics has been turned into a natural SCIENCE - to a large degree because of HIS efforts. This is what not only his supporters but even critics such as Sampson, Seuren, Postal, or Boden [to name a few] agree on, and what Norbert has told us on more than one occasion on this blog. There have been biolinguistic journals and books published advocating interdisciplinary work with OTHER scientists. Now you tell me this has all just been a big ruse? That it would be merely A BIT worrisome if NONE of Chomsky's claims made sense? Heck, that is orders of magnitude worse than anything I wrote in my Potpourri, which deals with ONE book, not his lifetime accomplishments [why did some people get so upset by it then, I wonder - was I too charitable?]

      Now if Chomsky's activity has not been science, what was/is it? If linguistics is not a natural science, WHAT is it? You say what linguists "care about is whether what they themselves are doing and teaching to their students makes sense and is viable as a basis for future research". Is there any objective standard regarding WHAT makes sense, or is that up to the individual linguist? Forgive my ignorance, but based on your description linguistics resembles more an activity like that of modern-art critics: we all look at the same expressionist work, but as long as what I interpret into it makes sense to me, it really does not matter what you think makes sense? Or whether anything in the picture resembles my interpretation of it? Of course, if linguistics truly is the kind of activity you have pictured, then being worried about a few contradictions here and there is inappropriate...

    18. @Avery. Could not disagree more. On the big points Chomsky has been essentially right. He has correctly identified the main research problem and has made important contributions to its solution. I too have disagreements with him. But the idea that some are mooting, i.e. that his views are effectively incoherent, is garbage. Anyone who thinks as much either knows nothing or is delusional. I don't count you in either group, so I assume that you did not intend to suggest this.

    19. @Norbert: thank you for the reassurance that linguistics remains a science. Avery had me worried for a moment.

      I assume you do not mean to suggest that Paul Postal either knows nothing [I recall you saying otherwise] or is delusional. Possibly you mean to suggest that Paul does not know some important details about Chomsky's view? That is of course possible and if the entire discussion here has established nothing else, one thing has become clear: Most people are not entirely sure what exactly Chomsky's current view is [never mind what he may have said 10, 20, 40 years ago]. Some have said quite clearly that they do not care whether Chomsky's view is coherent as long as what they take from it is. Nothing wrong with that attitude.

      But some of us do care about Chomsky's view. And it is probably safe to assume that everyone voicing an opinion here knows a lot MORE about Chomsky's views than the average person who is interested in the science of language. To state the obvious: we have not reached a consensus. I think there's no point in going over the arguments one more time [or even 10 more times] - chances are we'll continue to disagree. I do not really care what you call me. But I doubt you will convince many that Paul knows nothing or is delusional. We are at an impasse and only one person can change that: Chomsky. He could write a book as crisp and clear as Syntactic Structures and lay out his current view. Maybe he can show how sets can be 'naturalized'? I recall that the interview in which he made what Paul calls the foundational admission was in 2004. That is 9 years ago - surely Chomsky's thinking has evolved since then?

      To answer a looming objection: Why should Chomsky care that I think his view is incoherent? He shouldn't. But there's a world out there. You may have noticed that Paul's paper 'Chomsky's Foundational Admission' has been downloaded 2700 times. So far no one has presented him with an argument that convinced him he is wrong. As far as I know your attempt here was the most detailed and direct response. Now, on a very conservative estimate there are roughly 1000 people out there wondering whether Chomsky's view is incoherent. Some of those might be reviewers for important journals. David thinks many of these reviewers are not biased against work coming from biolinguists but ignorant that such work exists. Would they be motivated to change their state of ignorance if they have just read that the biolinguistic view is internally incoherent? Even if they have doubts, they'd hardly go through the gazillion posts here and then, maybe, agree with you. But they certainly would read a concise book [or paper] by Chomsky with a snappy title like 'Platonism Naturalized' or 'Illusions of Incoherence'.

    20. @Christina&Norbert, will have to delay on replying properly to this because my house phone-line internet has ceased to work for reasons as yet unknown, so time is limited. I will however stick with the claim that individual linguists care more about what makes sense to them than what is in accord with a text written by somebody else, & the 'magic of science' is that groups of people doing this (and communicating with each other about their results and interpretations) sometimes at least seem able to converge on ideas that really work better than what is gotten by trying to find the correct interpretation of previous writings. This is something we didn't know 400 years ago.

      & I don't think Chomsky 'made linguistics a science'; it was a somewhat confused immature science before he started working, and then became a hopefully somewhat less confused and immature science thereafter. But I don't think he got everything right.

    21. Everything? Who believes that? The fundamental problem? Yup, me!

    22. To say a bit more about my inflammatory statement: "[most linguists don't care] whether all or even any of them do or don't make sense or add up to a consistent whole under some other discipline's conventional interpretation of what the words he uses are supposed to mean" carries a qualification about other disciplines' interpretations of the words. This is crucial because obviously nobody would be any kind of Chomskian if nothing C said made any sense to them, but most linguists are not intellectually concerned at all with the problems of people who seem unable to stop themselves from interpreting the term 'rule' as something that is in some sense written down in their minds and consciously consulted in order to be 'followed' (not a rare disorder amongst philosophers; the egregious G. P. Baker and P. M. S. Hacker would be amongst the noisier sufferers, iirc).

      Chomsky and others have tried to combat this by inventing the word 'cognize', but it clearly made no difference at all. I think it's very unlikely that Chomsky could sort things out by writing a clearer book, even if he decided that he absolutely had to do it.

    23. @Avery: It seems there is some misunderstanding about two points. First, there was a debate about 'knowledge' of language that resulted in Chomsky changing his terminology to 'cognizing'. It had to do with the fact that some philosophers mistook Chomsky's usage of 'knowledge' for the philosophical usage of 'Knowledge'. But from what I can tell this confusion was pretty much cleared up a long time ago - mainly because Chomsky has been quite clear about what he meant and didn't mean regarding having a system of rules and following rules vs. having explicit knowledge of these rules.

      The issue that has not been resolved is whether knowledge [or cognizing] is a two-place relation between a language L and a person who has knowledge of L [as on this blog Paul Postal and Jan Koster have claimed], or whether knowledge of language is being in a certain brain-state. To use Norbert's example, on this view knowledge of language is comparable to having a headache: knowing I have a headache and having a headache are the same thing, so it does not make sense to speak about my knowledge of my headache [though a doctor can gain some 3rd-person knowledge about it]. This is a very different issue from the one that has been resolved. And given that neither Paul Postal nor Jan Koster are obscure philosophers who know next to nothing about linguistics, it would seem sensible for Chomsky to take their concerns seriously and address them, if he can. Given that there is also unclarity among the undoubtedly very knowledgeable people who have contributed here, it is sensible to assume there is at least some unclarity among reviewers for journals or grant proposals, and if Chomsky could clear up this unclarity it certainly does not seem like a waste of his time - given that he deeply cares about the field.

      The second point is that I am really surprised you would think linguists do not care about the things we have been discussing here. Take a look at the comments sections under each post. Norbert is certainly providing an excellent service for us and covering a wide variety of topics. Yet his "How to Study Brains" has zero comments [something I found very surprising because I was looking forward to watching the linguistics experts debate this very interesting topic]. Charles Yang has contributed a very interesting post, "Learning Fast and Slow I: How Children Learn Words...", which has 2 comments. In comparison, "Going Postal" has 100 comments. No one was forced to post anything here, so I would assume people are interested in this topic, and I am not sure why you would draw the conclusion that linguists do not care whether what Chomsky writes is coherent.

    24. Yang's posting is about relatively new work - I'd prefer to learn more about it before asking questions, & I don't have any problems with Norbert's How to Study Brains, in principle at least (tho I don't understand why people are obsessed with binarity, and why Norbert wants to produce all structure-sharing by movement). But the essential step of deciding that you can infer something about structure from function is critical, and not everyone is willing to make it, even now. (eg Geoff Pullum, as far as I can tell.)

      But all of these 100 comments on threads like this one are by the same handful of people - the vast majority of working linguists appear to be absent. So a few linguists care, but not many.

    25. You say: "But the essential step of deciding that you can infer something about structure from function is critical, and not everyone is willing to make it, even now. (eg Geoff Pullum, as far as I can tell.)"

      I would find it pretty surprising to learn that there are still people seriously questioning that you can infer something about structure from function - certainly I have never read Pullum claiming that. Do you have a reference where he says something that could be interpreted that way?

      I also did not mean to say YOU have to read [or comment on] Yang's post. But he has done this kind of work for some time, and similar work has been done by 'empiricists' for even longer. There is certainly some controversy about how the results of this work should be interpreted, and it also emphasizes the BIO of biolinguistics. So it certainly could provide ample stuff for discussion [which in my opinion does not only have to be based on 'having problems with Yang's work'] - or so I would have thought...

    26. I didn't say *I* had any problems with Yang's work (but I know there are people who do - I won't speculate about why). I'll try to dig up a specific Pullum reference, but anyone who has problems with the psychological reality of grammar rules (in general, not with respect to some specific proposal such as transformations or 'Move'), which iirc includes Pullum (and Gazdar), is denying the possibility of inferring something about structure from 'function' (I think I should have said 'behavior').

    27. This comment has been removed by the author.

      Christina Behme, March 14, 2013 at 4:55 PM
      I am looking forward to the Pullum reference. But your talk about the psychological reality of grammar rules confuses me. Chomsky denied such reality explicitly back in 1995:

      "...languages have no rules in anything like the familiar sense, and no theoretically significant grammatical constructions except as taxonomic artifacts. There are universal principles and a finite array of options as to how they apply (parameters), but no language- particular rules and no grammatical constructions of the traditional sort within or across languages" (Chomsky, 1995, p. 5-6).

      I am not aware of Chomsky having re-introduced 'rules' after 1995 - are you? So if languages have no rules the non-existent rules can have no psychological reality and if Pullum denies psychological reality of grammar rules he is in agreement with Chomsky. If you mean 'psychological reality of language' I think you pretty much have to be a Platonist to deny that [though there is dispute about HOW MUCH DETAIL you can infer] and if you mean "psychological reality of cognitive structures implicated in linguistic knowledge", I think even Platonists accept that.

    29. I view parameters as a degenerate case of language-particular rules, where the specification is reduced to 'yes' or 'no'. 'Merge' could also be regarded as a 'universal rule' aka 'principle'; there are a lot more possibilities in this space than people usually bother to investigate. For example lists of bounding nodes, which occasionally appeared in the GB period, are midway in complexity between traditional 'rules' and 'parameters'.

      While you're waiting for me to find a suitable Pullum selection (or realize that I've been internally misrepresenting him), I take Michael Devitt's "Linguistics is Not Psychology" (2003) to be a representative sample of evidence that there are still many people who don't buy any kind of behavior-to-structure inference. Perhaps I'm misreading this, and misunderstanding similar, earlier things by Sterelny and Soames.

    30. Thanks, I'd appreciate it if you could find a Pullum quote. Devitt is a philosopher so some here may discount what he says, but Pullum certainly counts as a linguist, and one of the best...

      Now when Chomsky writes in 1995 "languages have no rules in anything like the familiar sense", I would assume he must mean by 'familiar sense' what his audience has at this point become familiar with from his work; most notably his 1980 book 'Rules and Representations', where he writes:

      "The fundamental idea is that surface structures are formed through the interaction of at least two distinct types of rules: base rules, which generate abstract phrase structure representations and transformational rules, which move elements and otherwise rearrange structures to give surface structure" (Chomsky, 1980, p. 144)

      it would be too much to type out the explanations he gives, but he specifically claims that "the transformational mapping to S-structure can be reduced to (possibly multiple applications of) a single general rule: 'Move Alpha'" [ibid. p. 145]

      These were the rules Chomsky's audience was familiar with in 1995. Given that in 1995 he also eliminated D-structure, there certainly would be no use for a transformational rule that maps D-structure to S-structure. From the foregoing I would conclude that, whatever Merge is, it cannot be regarded as a rule in Chomsky's terminology. That does not change the fact that you [and possibly a host of others] have applied the term 'rule' to Merge - see for example: Baker, M. (2003). The Atoms of Language: The Mind's Hidden Rules of Grammar. Oxford: Oxford University Press.

      But at some point I have to ask: when people differ in very fundamental claims from what Chomsky himself has clearly and unambiguously said, based on what do they call themselves Chomskyan? This is a problem that does not just apply to the issues we discussed regarding apparent incoherence but to a host of other commitments.

    31. So, to claim to be a Chomskian you have to say what the dates are of the work you agree with and find helpful (1955-1971 would be mine for linguistics per se, but I like some of the more general stuff he said in the 80s as well). I think that convincing philosophers is in a way more important than convincing linguists, because linguists are more interested in finding patterns in linguistic data than anything else, and mostly don't worry about any deeper implications that might exist in what they do.

    32. I haven't managed to find a suitable Pullum quote (yet), but reviewing Pullum & Scholz on the infinitude claim was relevant, I think, because this article seems to be a somewhat confusing conflation of two issues:
      a) is it adequately motivated to describe any languages, such as English or Greek, as having an infinite number of sentences?
      b) must all languages have an infinite number of sentences?

      They apparently offer 'no' to both, as far as I can see, for reasons that I find persuasive for (b) but not for (a). What interests me most about (a) is their discussion of the 'seductive' (sec. 4.1) grip that the infinitude claim seems to have on the minds of linguists. The obvious-to-me way to get the infinitude claim for English, German, Greek, etc. is a behavior-to-structure inference: from the apparent forms of the behavior (e.g. my proton-creation chant) it seems plausible to hypothesize a mechanism in/aspect of the structure of the mind that can produce structures of unbounded size (a mathematical property of the hypothesized mechanism), and it also seems plausible to identify what this thing 'ideally produces' as the language (English, German, etc.), since the known limitations (boredom, mortality, all the Gamma-Ray Bursts that we can expect from the upcoming collision with Andromeda, the heat-death of the universe, etc.) are clearly extra-linguistic.

      P&S go off on various excursions, such as various ways of describing grammars that are 'non-generative' according to them, but not non-generative according to early Chomsky as I understand it, but the fact that they don't entertain the possibility that most linguists still buy the infinitude claim (for languages for which the usual kinds of arguments for it actually go thru) via the obvious behavior-to-form-of-mechanism inference suggests strongly to me that they don't accept this kind of argument.

      & they appear to have a lot of respectable philosophical company in this respect.

    33. Thank you for this. I am interpreting the 2010 Pullum&Scholz paper rather differently and we also seem to disagree on what generative grammars are [I base my opinion on what Paul Postal tells me and we probably can agree that he knows what he's talking about].

      I would like to direct your attention to this 2006 paper of Pullum & Rogers
      [ ] . They clearly state:

      "Our goal in this paper is to provide an introduction to some interesting proper subclasses of the finite-state class, with particular attention to their possible relevance to the problem of characterizing the capabilities of language-learning mechanisms. We survey a sequence of these classes, strictly increasing in language-theoretic complexity, discuss the characteristics of the classes both in terms of their formal properties and in terms of the kinds of cognitive capabilities they correspond to, and suggest contrasting patterns which could serve to distinguish the adjacent classes in language learning experiments." [p.1]

      This indicates to me the opposite of what you claim, namely that the kind of grammar an organism can learn allows us to draw conclusions about the learning mechanism that is used. It is also in line with the work on language acquisition and the poverty of the stimulus argument that Pullum & Scholz have done for a long time. The only thing I can imagine Pullum denies is that at this point we can have confidence about what exactly the mechanism is that allows humans to acquire language [in this context also see Scholz, Barbara C. and Geoffrey K. Pullum (2006) Irrational nativist exuberance. In Robert Stainton (ed.), Contemporary Debates in Cognitive Science, 59-80. Oxford: Basil Blackwell]. Now, saying that currently we do not have enough evidence to rule out possible alternatives to Chomsky's nativism is neither ruling out this kind of nativism nor claiming the evidence we can gather from 'function'/behaviour is irrelevant or eternally insufficient to draw conclusions about structure.
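      [For readers unfamiliar with the subregular classes the quote refers to, here is a minimal sketch of the simplest case, a Strictly 2-Local grammar: a string is accepted iff every adjacent pair of symbols, including the word edges, appears on a finite list of licensed bigrams. The toy alphabet and bigram set below are invented for illustration; they are not taken from the Pullum & Rogers paper.]

```python
# Sketch of a Strictly 2-Local (SL_2) grammar, the lowest rung of the
# subregular hierarchy: membership is decided by scanning adjacent
# symbol pairs against a finite set of licensed bigrams.

EDGE = "#"  # word-boundary marker

def sl2_accepts(string, licensed_bigrams):
    """Return True iff every bigram of #string# is licensed."""
    padded = EDGE + string + EDGE
    return all(padded[i:i + 2] in licensed_bigrams
               for i in range(len(padded) - 1))

# Toy grammar over {a, b}: 'b' may never immediately follow 'b'.
grammar = {"#a", "#b", "aa", "ab", "ba", "a#", "b#", "##"}

print(sl2_accepts("abab", grammar))  # True: no 'bb' bigram occurs
print(sl2_accepts("abba", grammar))  # False: 'bb' is unlicensed
```

      [The point of the paper is that different learning mechanisms correspond to different such classes, so which patterns an organism can learn constrains hypotheses about its internal mechanism.]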

    34. Postal would know what he's talking about with generative grammar, but so do various other people and they don't necessarily say the same thing (I don't know what P might have told you). I tend to find Culicover & Jackendoff's (2005) views congenial.

      The MonkeyMath and Infinity claim papers seem contradictory to me, the former does seem to admit inference of structure from behavior, the latter to reject the most obvious one. Perhaps they got carried away in their campaign against overstated claims (which I generally approve of)?

    35. I think in most of his publications that are relevant to the topic Pullum "admits" that it is possible [in principle] to infer structure from behaviour. So far we just do not know enough to rule out that the internal structure is very different from the one suggested by Chomsky [insofar as Chomsky's proposals are specific enough to be dis/confirmable]. In fact the strongest claims challenging that we can infer structure from behaviour that I am aware of are from Chomsky and McGilvray. And if we take their extreme internalism seriously, it certainly makes sense to claim that studying behaviour in order to learn something about the nature of language is as interesting as 'recording what's going on outside the window' would be for a physicist.

    36. I have no problem with what you say right above, but the infinitude article still looks like an overreaction (to an arguably real problem) to me.

  10. Marc, it is less credible because your system "that is running on its own" has no obvious linguistic functionality without the external dimension. If we limit ourselves to recursive Merge, it looks like an abstract combinatorial capacity with an unknown range of applications (although applications other than language have been suggested in the literature). Language is an app!

    1. The external dimension is predicated on the internal, no? I know that you are inclined to argue that this has all been around before an individual was born, but the external dimension of those before us was also predicated on their internal dimension. We can pick up their output and learn it quickly because our internal dimension is not unlike theirs.

      In the end the upshot that FL is perhaps only an app may be correct. That is perhaps boring, but it seems like a simple explanation, and those typically are the correct ones. After all, who would have thought hundreds of years ago that the entire known universe is made out of just 17 particles. Although difficult for us to grasp, it's probably simple.

    2. Jan, I am not sure I understand this. I just do not see why you think it is so strange to think that indeed (externalized) language is an app running on the engine that is called Merge. Again, for me this would merely imply that most of the things are in the app, not in the engine. I don't mind. I believe that some people (Cedric Boeckx?) would say this means I am not a (bio)linguist, but a philologist. Again, I don't mind.

    3. Marc, not clear to me what you are saying. What you think I find strange is in fact what I have proposed myself: that language is an app based on something abstract like Merge. What I really find strange is to call this abstract engine FLN (the faculty of language in the narrow sense). Ascribing an intrinsic cultural function to biological structures is what Gould and Lewontin (1979) called "the Panglossian fallacy", things like "the biological function of the nose is to support your glasses." The latter example we find ridiculous. I don't see why we should be milder about the opinion that Merge is FLN.

      I don't share your indifference about such issues. It's my feeling that extreme internalism obscures the true nature of language, which is a human invention of EXTERNAL means to support and share thought. Thanks to language, we humans live in symbiosis with a shared external memory. The current fashion of biologism in the humanities is downplaying the role of human agency and cooperation in history. If you don't believe me, google "bio-ethics" or, my favorite, "biotheology"!

    4. I essentially agree with Jan. Maybe a charitable way to avoid the ridiculous conclusion of the first paragraph is taking the N in FLN very seriously and remembering that FLB plays an enormous role in our linguistic abilities. [Which would of course entail accounting for FLB when we try to come up with language evolution stories, etc.]

      Independently, I find it quite ironic that regarding the second paragraph Chomsky and McGilvray propose the opposite: it is our innate endowment, not cooperation and external support, that allows for fully creative human agency. Like Jan, I find this biological determinism very difficult to accept. Even very watered-down versions like Dennett's "Breaking the Spell: Religion as a Natural Phenomenon" [which one reviewer aptly dubbed 'The God Genome'] strike me as trying to read far too much into our biology...

    5. Jan, your argument that "ascribing an intrinsic cultural function to biological structures is what Gould and Lewontin (1970) called "the Panglossian fallacy"" presupposes that Merge (in Chomskyan sense) is assumed to be a cultural phenomenon. Although in itself your argument is perfectly valid, it doesn't apply here because generativists do not claim/believe that Merge is a cultural phenomenon. You would have to first prove that Merge is cultural and not individual/biological. And even if you can successfully do that, I'm sure no one in their right mind would disagree with the "Panglossian" argument.

    6. Given that Jan seems to object to biological determinism, I read his comment as saying: reducing human language to the postulated biological 'phenomenon/structure' Merge is as absurd as claiming that the shape of our noses evolved so they could support glasses. You can of course disagree with that, but I doubt Jan denies that on Chomsky's view Merge is biological.

    7. @ Seid Tvica You have misunderstood what I am saying. My point is that EVEN if recursive Merge is a biologically given structure (whatever that means), its APPLICATION to language is not (or at least not without further argument). In fact, there is an argument against it, namely that Merge only has a linguistic function thanks to a cultural record (a humanly invented collection of shared signs). Think of the lungs: no doubt an innate structure, but with a biological function (breathing) OR with a cultural function (when you are playing the trumpet). The issue is whether Merge is like the lungs in breathing (biolinguistics) or like the lungs in playing wind instruments (my alternative view). Innateness is a non-issue, as everything we do is constrained by our biology.

    8. Jan, thanks for your response. I agree that the application of Merge in `externalization' of language is for communicative reasons (i.e. cultural, if you will), but such an operation must be present at the conceptual level.

      If you conceive of an event, say "X verbs", you must be able to bring the agent X and the action V into some relation internally. If not, then it's like saying you can have water by having two H atoms on one side of the room and one O atom on the other. Well, that's not water. So if the relation is not there at the conceptual level, then how could you possibly externalize it? Assuming you agree (or, if you don't, are at least willing to entertain it :) ), how is the internal relation between X and V different from the external one? I don't think it is, certainly not fundamentally.

    9. @Jan: interesting point. Now, assuming you're right that the application of Merge to language is like the lungs' cultural function [trumpet playing], what would be the BIOLOGICAL function of Merge on your view? I can see a cultural function 'piggybacking' on a biological function [or better, on an organ that evolved to perform a biological function], but not the other way around.

    10. Actually, I have some quibbles about Merge, but putting those aside for a while, I can only speculate. First of all, a previous biological function is not strictly necessary, as Merge could be a spandrel (as has occasionally been suggested by Chomsky himself). But even if originally a spandrel, it could have had an earlier (or still existing other) function. I am thinking of general properties of working memory and consciousness. Typically, when we organize large chunks of information via our working memory, we do so by subdividing them hierarchically, into smaller parts. So hierarchical organization might be reducible to something more general here. That, plus recursion, also seems part of the very fabric of our consciousness: for every perspective we are able to construct a meta-perspective. But again, we are pretty much in the dark here.

    11. This comment has been removed by the author.

    12. Thanks for the explanation. Merge as a spandrel seems very unlikely [we should find it in other organisms too if that were the case, right?]. But if you picture it as having a more 'general purpose' biological function and not being narrowly domain-specific, there seems to be no strong argument against that possibility [though of course we are then no longer talking about Chomskyan Merge]. And of course the candidates you mention [while plausible] are those shared with other species. So on your view it would be culture that turned the biological endowment [whatever it might be] into 'language'?

    13. @ Jan Koster: To "organize chunks of information via our working memory" sounds damn close to thinking. If so, you are in the same boat as Chomsky after all.

  11. This comment has been removed by the author.

  12. @ Christina: No, culture can't be enough. Merge, whether in the narrow linguistic sense or in the broader sense proposed above, can only do what it does when accessible to human agency -- linguistic combinatory power is used as a tool, after all. My guess is that the development of consciousness was another key factor.

    1. Right, consciousness would be needed. But, again, I think consciousness is shared with other species, so our pre-linguistic ancestors [or language-ready apes] probably already had it. So I was interested in the part that you think is species-specific. It does not really have to be 'one thing' - quite a few accounts assume that the difference between us and apes must lie in brain organization, but there's a lot of debate about how much is 'inside the head' and how much is external. I think Chomsky is really quite unique in putting 'everything' in the head and treating 'externalization' as an afterthought. [I am only talking about language evolution at the moment].

      And of course there are debates about what 'in the head' is the difference-making difference. For example, Mike Arbib tries to get a lot of 'work' out of mirror neurons and motor action. So I am mainly interested in where you situate yourself in relation to other accounts.

    2. Note that I not only emphasized consciousness but particularly accessibility to human agency. This has to do with what Merlin Donald calls "autocuing": voluntary memory search and retrieval, found only in humans. See:
      Like Donald, I consider externalization of memory a key innovation. I clearly disagree with Chomsky in this respect.

    3. Thanks for the link. Given that for Arbib creative pantomime of previous actions is also an important stepping stone towards language [I seem to remember it was one of the first new abilities needed to get from language-ready ape to protolanguage], I imagine the accounts might be quite compatible. And presumably what Mike Corballis calls the uniquely human capacity for mental time travel could be another term for 'voluntary memory search'? Of course these two [like most 'language-evolutionists'] are gradualists, meaning that they believe a host of gradual changes from the LCA eventually led to language in humans [so we would probably not find a 'unified language organ' but distributed changes in seemingly unrelated parts of the brain].

      Would that work from your perspective, or do you think there's something 'to' language that is so sharply different from the rest of animal cognition that a Chomsky-style mutation might be the only plausible explanation? I think one can be a lot less extreme than Chomsky [whose view I reject for a host of reasons] and still question that a host of small, random changes will get you all the way from LCA cognition to modern language.

    4. I don't want to get into a broad debate about what's unique about humans. Extreme gradualism in evolution is less self-evident these days than it used to be. Even a gradualist has to account for the fact that language appeared rather suddenly and recently in the record. That being said, I believe that a sudden Merge mutation would be insufficient. A non-linguistic zombie could "have" Merge, too, after all. The real decisive step, in my view, is the development of human agency, which includes consciousness and the ability to GIVE functions to things, without having to wait for slow natural selection. In the case of language, agency brought us the invention of the lexicon (a form of external, collective memory) and ACCESS to Merge, in order to use it as a tool to expand the inventory of basic lexical elements. Without agentive access, Merge is worthless for the purpose of language.

    5. To clear up a potential misunderstanding: by a Chomsky-style mutation I did not mean 'Merge' but merely an event that could have provided the difference-making difference. I am so far not aware of any description of Merge that is detailed and consistent enough to evaluate exactly what supposedly evolved. On the one hand Chomsky stresses how 'easy' it would have been for this very 'simple' mechanism to come into being. But then it is reliably described only very vaguely: "You got an operation that enables you to take mental objects [or concepts of some sort], already constructed, and make bigger mental objects out of them. That's Merge" (Chomsky, 2012, p. 13). Well, on Tomasello's account we also have mechanisms that put things together, and he tells me a lot more about what these things are. So I do not know what simple Merge would add here. Yet when I actually look at publications by syntacticians who try to account for linguistic phenomena using Merge, the simplicity has suddenly vanished. I see operations at a level of complexity that reminds me of the allegedly eliminated parameters of a bygone era. I am not enough of a syntactician to appreciate how Merge has simplified earlier accounts. But I am enough of a biologist to know that something that operates at such a level of complexity cannot have 'poofed' into existence via a single mutation that was immediately stable enough to spread through a population and never changed in the billions of transmissions it has undergone into every single human living right now. Norbert seems quite happy to invoke 'miracles', but as a scientist I prefer the number of miracles required for an acceptable account to be zero.

      Regarding extreme gradualism: Dawkins addressed this problem decades ago with a neat analogy: if you read that a biblical tribe crossed a desert in 30 years and you know the size of the desert, you can assume either [i] that they traveled the entire time at a steady speed of 10 feet/day, or [ii] that they walked a few days, set up camp, stayed there some time, walked a few more days, set up a new camp, etc. In both cases you make gradual progress, but no one would assume [i] was what they actually did. So even the most militant gradualist assumes that evolution is like [ii]: stretches with virtually no change and intervals with more or less significant change. Saltationists like Chomsky, on the other hand, assume you sit for 29 years and 364 days on one side of the desert and then on the last day you cross it in one giant leap. - Analogies are of course imperfect, and sometimes we do have saltations, but even then they still need to be passed on to offspring etc.

      Also note that it is not entirely clear that a detectable change in technology [the 'sudden leap' in the archeological record] is a reliable indicator of an increase in overall intelligence and/or the arrival of linguistic abilities. By analogy, comparing the "archeological record" of human technology in the 17th and 20th centuries, a scientist of the 44th century might conclude that our species underwent a dramatic increase in intelligence or acquired new linguistic abilities during this period. But we have little reason to believe that such changes took place.

      Now you make an interesting point at the end. You say:
      "The real decisive step, in my view, is the development of human agency, which includes consciousness and the ability to GIVE functions to things, without having to wait for slow natural selection. In the case of language, agency brought us the invention of the lexicon (a form of external, collective memory) and ACCESS to Merge, in order to use it as a tool to expand the inventory of basic lexical elements."

      So you see the invention of the lexicon as an entirely external process? And do you think it is stored exclusively or mostly in what you call 'external collective memory'?

    6. I agree that the archeological record is hard to interpret. As for the lexicon, that's practically the only thing we can be sure is unique to language. Bad news for biolinguistics, because it is not an individual, internal state but an external, cultural collection. You can internalize such cultural objects, like songs, but that does not change their ontological status as cultural objects.

    7. Thanks for this. Now here is one problem: there still seems to be something internal that allows us, but not, say, dogs, to internalize cultural objects. For most of our history dogs have been around and been exposed [especially in tribal societies] to a great deal of the same culture as the kids growing up in those cultures. Yet dogs do not learn language. So something must be internal. From this one does not have to draw the crazy conclusion that culture, or even worse 'externalization', plays no role and everything of interest is 'in the head' - but clearly something is in the head.

      Also, by now it has been shown that dogs are capable of learning a considerable number of words, and they seem to do so by 'fast mapping', which has been claimed to be unique to humans. At the moment far too little work has been done on the issue, but it seems at least possible that dogs could acquire enough of a lexicon to get to a 'toy language' - yet there is no sign so far that they actually do. Again, far too little is known, but my hunch is that the difference-making difference is in the brain. Do you think I am wrong about that? If so, why?

  13. Cultural objects only exist as such in relation to a certain type of mind with its inner resources ("derived intentionality"). This is true for all external representations, from Lascaux to Rembrandt, and for all tools, from hammers to computers, etc. Far from being an afterthought, externalization was one of the key innovations that made us human. If you like computer metaphors: it turned us from isolated "machines" into members of a distributed network. See, among others, the papers and books by Merlin Donald, like:
    Dogs, and even chimps, don't create external representations at all, and therefore hardly have a mental life beyond their own skulls. I am afraid that Chomsky wants us to see ourselves like dogs and chimps in this respect!

    1. I certainly don't want to defend Chomskyan extreme internalism. On the other hand, I think it is not quite correct to say that other animals do not create any external representations. Look at the nest-building excesses some birds go through to impress mates. Also, some limited 'mental time traveling' seems possible, allowing, e.g., squirrels and jays to 'remember' where they cached food. But I agree all these examples are at best a very faint image of what we have done.

      My guess is that some innate mental capacity [call it M] allowed us to go far beyond what other animals do in creating external representations [ER]. But then these ER fed back into M, and you get some kind of self-reinforcing cycle going. That could explain why there are really no sharp divides between human and non-human cognition, and yet you still get a trait like language that is unique in the animal kingdom. At least it does not sound immediately crazy to me...

    2. Nest building is not a good example of what we are talking about, because it is instinctive behavior, under internal stimulus control. Human representations result from free, voluntary acts of creation.
      I wonder, at this point, whether it is appropriate to use Norbert's blog for this exchange. If you want to continue, please use email.