Monday, January 26, 2015

Words, things and EVOLANG (again)

In the evolang paper I linked to here, Chomsky mentions two basic features of natural language. The first is the distinctive nature of natural language (NL) “atoms” (roughly words or morphemes). The second is the generative procedure. He extensively discusses an evolutionary scenario for the second, but only briefly mentions the first, saying the following (1-2):

The atomic elements pose deep mysteries.  The minimal meaning-bearing elements of human languages – word-like, but not words -- are radically different from anything known in animal communication systems.  Their origin is entirely obscure, posing a very serious problem for the evolution of human cognitive capacities, language in particular (my emphasis NH).  There are insights about these topics tracing back to the pre-Socratics, developed further by prominent philosophers of the early modern scientific revolution and the Enlightenment, and further in more recent years, though they remain insufficiently explored.  In fact the problem, which is severe, is insufficiently recognized and understood.  Careful examination shows that widely held doctrines about the nature of these elements are untenable, crucially, the widely-held referentialist doctrine that words pick out extra-mental objects.  There is a great deal to say about these very important questions, but I’ll put them aside – noting again, however, that the problems posed for evolution of human cognition are severe, far more so than generally acknowledged.

What sort of problem do they pose? Well, as we see below, Chomsky argues that human linguistic atoms (roughly, words) have qualitatively different properties from the units/atoms used in animal communication systems. If he is right, then looking at the latter to inform us concerning properties of the former is a mug’s game, not unlike looking to animal communication systems for insight into the origins of the hierarchical recursion one finds in NLs.

Furthermore, these differences, if they exist (and they do; see below), are important. For from the department of “you can’t make this s*!t up” comes research like this, motivated by the idea that we can gain insight into human language by studying grunts.[1] How exactly? Well, the motivating conceit is that a grunt language will help us “understand how languages are created –how linguistic conventions (e.g. words) come to be established” (3). And this is itself based on the belief that grunts are an appropriate comparison class for words (you can hear the ape calls reverberating in the background).

This assumption is a very crude version of what Lenneberg dubbed “the continuity theory of language development” (here p. 228): the idea that human language “must have descended from primitive animal forms of communication, and the study of the latter is likely to disclose” something about the former. It’s a short step from the continuity thesis to grunts. And it’s already been taken, as you can see.

So what’s wrong with this? Well, lots really, but the main problem is that once again (see here to understand the “again”) there is no good description of the evolved object’s basic properties (i.e. of what “words” in NL are really like). There is no description of the object to be studied, no characterization of the evolved capacity. Why not?

I suspect that one reason is a tacit belief that we already know what meaning consists in: words denote things and meaning resides in this denotation relation between words and the objects they refer to. If this is right, then it is understandable why some might conclude that studying how grunts might refer to things in a given context would shed light on how meaningful NL words might have arisen in the species from earlier grunts.  

Chomsky (ahem) critically examines (‘excoriates’ might be a better term) this view, what he dubs the “Referentialist Doctrine” (RD), here. In what follows, I want to outline some of his arguments in service of another point Chomsky makes (one, btw, very reminiscent of the one that the late Wittgenstein makes in the first 100 or so entries of the Investigations), namely that though RD is far off the mark when it comes to NL words, it’s not a bad description of animal articulations.[2] If this is correct, then we have almost nothing to learn about NL words by studying what happens in animal communication. Why? Because the underlying assumption, the continuity thesis, is simply false for this domain of EVOLANG. Let’s consider the arguments.

First, what is RD? It’s the view that linguistic meaning originates in a basic capacity that words (all kinds of words, not just nominals) have of standing for mind-independent objects. Thus the basic semantic relation is word to object. This basic semantic relation between words and things causally undergirds acts of denoting on the part of humans. Thus, the human capacity to speak about the world and to use language as a social tool (i.e. to communicate) supervenes on the semantic capacity that words have to denote mind-independent things. Or, put more succinctly: denotation licenses denoting. This is the position that Chomsky wants to discredit. He argues for the opposite view: that whatever sense we can make of denotation (and he hints that it is not much) piggybacks on acts of denoting, which are themselves heavily dependent on richly structured minds. Thus, the word-object-in-the-world relation is, at best, a derivative notion of little (if any) significance in understanding how NL words function.

How does Chomsky argue his position? First, he agrees that speakers do refer: “That acts of referring take place is uncontroversial.” However, he denies that this implies that acts of referring supervene on a more primitive denotation relation that holds between words and the things that acts of referring pick out. Or as Chomsky puts it, the fact that people refer using words

…leave[s] open the status of the relation of denotation; that is, the question whether there is a relevant relation between the internal symbol used to refer and some mind-independent entity that is picked out by the expression that is used to denote: an object, a thing, individuated without recourse to mental acts.

Or, put another way: that people denote using words is a fact. That this activity requires a primitive denotation relation between words and things “individuated without recourse to mental acts” is a theory to explain this fact, and the theory, Chomsky argues, is farfetched and generates unwelcome paradoxes.[3]

He deploys several lines of argument to reach this conclusion, the most interesting, IMO, being the analogy with the other side of words, their link to specific articulations (i.e. the word-sound relation). RD licenses the following analogy in the sound domain: just as human acts of denoting supervene on the relation between an internal mental symbol (e.g. kitten) and real-world kittens, so too human word articulations rest on the relation between internal phonetic symbols (e.g. [ki’n] for kitten) and physical sounds in the world. However, it is clear in the word-sound case that this has things exactly backwards. As Chomsky puts it:

Take the word kitten and the corresponding phonetic symbol [ki’n], the latter an internal object, an element of I-language, in the mind.  We can carry out actions in which we use [ki’n] to produce a sound S (or counterparts in other modalities), the act of pronunciation.  The sound S is a specific event in the mind-independent world, and there is a derivative relation between the internal symbol [ki’n] and S insofar as we use [ki’n] to pronounce S.  There is however no relevant direct relation between [ki’n] and S, and it would be idle to try to construct some mind-independent entity to which [ki’n] corresponds even for a single individual, let alone a community of users.

Anyone who has tried to map spectrograms to words has a good feel for what Chomsky is talking about here. It’s hard enough to do this for a single individual on a single day in a single set of trials, let alone a bunch of people of different sizes, on different days and on different occasions. No two people (or one person on different occasions) seem to pronounce any word in the same way, if “same way” means “producing identical spectrograms.” But the latter are the measurable physical “things” out there in the world “individuated without recourse to mental acts”. If this is correct, then, as Chomsky says, “there is no relevant direct relation” between mental sound symbols and their physical products. There is at most an indirect relation, one mediated by an act of articulation (viz. this sound symbol was used to produce that sound).

The analogy leads to a conclusion that Chomsky draws:

Acoustic and articulatory phonetics are devoted to discovering how internal symbols provide ways to produce and interpret sounds, no simple task as we all know.  And there is no reason to suspect that it would be an easier task to discover how internal systems are used to talk or think about aspects of the world.  Quite the contrary.

IMO, this is a very powerful argument. If RD is good for meaning, then it should be good for sound. Conversely, if RD is a bad model for sound, why should we take it to be a good one for meaning? Inquiring minds want to know. To my knowledge, nobody has offered a good counter-argument to Chomsky’s word-sound analogy. However, I am not entirely up on this literature, so please feel free to correct me.

Chomsky also offers a host of more familiar arguments against RD. He points to the many paradoxes that Referentialism seems to generate. For example, if ‘London’ refers to the space-time located burgh, then were it to move (as Venice did (see here for a more elaborate discussion of this same point)), would the meaning of London change? And if so, how can we say things like I visited London the year after it was destroyed by flood and it was rebuilt three miles inland?  Chomsky observes that this conundrum (and many others; see note 4) disappears if RD is abandoned and meaning is treated the way we treat the word-sound relation: as a secondary, very indirect relation.[4]

Chomsky also hints at one reason why RD is still so popular, but here I may be putting words into his mouth (not unlike putting one’s head in a lion’s jaws, I should add). There is a long tradition linking RD to some rather crude kinds of associationism, wherein learning word meaning is based on “ostension, instruction, and habit formation.” It is not hard to see how these operations rely on an RD picture, with semantics becoming the poster child for environmental approaches to language acquisition (i.e. theories which take mental architecture as a faithful representation of environmental structure). The opposite conception, in which meaning is embedded in a rich innate system of concepts, is largely antithetical to this associationist picture. It strikes me as entirely plausible that RD derives some (much?) of its intuitive appeal by being yet another projection of the Empiricist conception of mind. If this is the case, then it is, IMO, another argument against RD.[5]

Chomsky ends his little paper with a nice observation. He notes that whereas RD provides a poor account of human word competence, it seems to describe what happens in animals pretty well.[6] Here’s what he says:

It appears to be the case that animal communication systems are based on a one-one relation between mind/brain processes and “an aspect of the environment to which these processes adapt the animal's behavior.” (Gallistel 1991).

This observation seconds one that Charles Hockett made a long time ago here (pp. 569ff). Hockett noted many differences between human language and animal communication systems. In particular, the latter are quite tightly tied to the here and now in ways that the former are not. Animals communicate about things proximate in place/time/desire. They “discuss” the four Fs (i.e. food, flight, fight, and sex) and virtually nothing else. Very few movie reviews, it seems. Humans discuss everything, with the transmission of true (or even useful) information being a minor feature of our communicative proclivities, if what I hear around me is any indication. At any rate, human words are far more open-textured than animal “words” and can be arbitrarily remote from the events and objects that they are used to depict. In other words, when we look, we find that there is a strong discontinuity between the kind of semantic relations we find in animal communication systems and those we find in human language. And if this is so (as it strongly appears to be), then evolutionary accounts based on the assumption that human powers in these domains are continuous with those found in other animals are barking (grunting?) up the wrong bushes. If so, at least as regards our words and theirs and our combinatorics and theirs, continuity theories of evolution are very likely incorrect, a conclusion that Lenneberg came to about 50 years ago.

Let me end with two more observations.

First, as Chomsky’s two papers illustrate, linguists bring something very important to any EVOLANG discussion. We understand how complex language is. Indeed, it is so complex that language as such cannot really be the object of any serious study. In this way “language” is like “life.” Biologists don’t study life, for it is too big and complex. They study things like energy production within the cell, how bodies remove waste, how nutrients are absorbed, how oxygen is delivered to cells, … Life is the sum of these particular studies. So too language. It is the name we give to a whole complex of things: syntactic structure, compositionality, phonological structure… To a linguist, ‘language’ is just too vague to be an object of study.

And this has consequences for EVOLANG. We need to specify the linguistic feature under study in order to study its evolution. Chomsky has argued that once we do this we find that two salient features of language (viz. its combinatoric properties and how words operate) look unlike anything we find in other parts of the animal world. And if this is true, it strongly suggests that continuity accounts of these features are almost certain to be incorrect. The only option is to embrace the discontinuity and look for other kinds of accounts. Given the tight connection between continuity theses and mechanisms of Natural Selection, this suggests that for these features, Natural Selection will play a secondary explanatory role (if any at all).[7]

Second, it is time that linguists and philosophers examine the centrality of RD to actual linguistic semantic practice. Some have already started doing this (e.g. Paul Elbourne here and much of what Paul Pietroski has been writing for the last 10 years see e.g. here). At any rate, if you are like me in finding Chomsky’s criticisms of RD fairly persuasive, then it is time to decouple the empirical work in linguistic semantics from the referentialist verbiage that it often comes wrapped in. I suspect that large parts of the practice can be salvaged in more internalist terms. And if they cannot be, then that would be worth knowing for it would point to places where some version of RD might be correct. At any rate, simply assuming without argument that RD is correct is long past its sell-by date.

To end: Chomsky’s paper on RD fits nicely with his earlier paper on EVOLANG. Both are short. Both are readable. Both argue against widely held views. In other words, both are lots of fun. Read them.




[1] Thx to David Poeppel for bringing this gem to my attention. Remember my old adage that things not worth doing are not worth doing well? Well, apply the adage here.
[2] That words in NL are very different from what a natural reading of RD might suggest is not a new position, at least in the philo of language. It is well known that the Later Wittgenstein held this, but so did Waismann (here) (in his discussions of “open texture”), and so, most recently, did Dummett. Within linguistics, Hockett famously distinguished how human and animal “language” differ, many of his observations being pertinent here. I say a little about this below.
[3] Note that a relatively standard objection to RD is that basing meaning on a reference relation “individuated without recourse to mental acts” makes it hard to link meaning to understanding (and, hence, to communication). Dummett presents a recent version of this critique, but it echoes earlier objections in e.g. the later Wittgenstein. In this regard, Chomsky’s is hardly an isolated voice, despite the many differences he has with these authors on other matters.
[4] Chomsky discusses other well known paradoxes. Pierre makes an appearance as does Paderewski and the ship of Theseus. Indeed, even the starship Enterprise’s transporter seems to walk on stage for a bit.
[5] It is also often claimed that without RD there can be no account of how language is used for communication. I have no idea what the connection between RD and communication is supposed to be, however. Dummett has some useful discussion on this issue, as does the late Wittgenstein (boy does this sound pretentious, but so be it). At any rate, the idea that the capacity to communicate relies on RD is a common one, and it is also one that Chomsky discusses briefly in his paper.
[6] It also describes Wittgenstein’s ‘slab’ language in the Investigations pretty well. Here Wittgenstein tries to create a language that closely mirrors RD doctrine. He tries to show how stilted an un-language like this slab language is. In other words, his point is that RD in action results in nothing we would recognize as natural language. Put another way, RD distorts the distinctive features of natural language words and makes it harder to understand how they do what they do.
[7] This is worth emphasizing: Natural Selection is one mechanism of evolution. There are others. It is a mechanism that seems to like long stretches of time to work its magic. And it is a mechanism that sees differences as differences in degree rather than differences in kind. This is why continuity theses favor Natural Selection style explanations of evolution.

25 comments:

  1. Chomsky makes the assertion that the word-like things humans use for communication are radically different from the word-like things non-human animals use for communication; hence there is a discontinuity between the atoms used by humans and those by animals, posing an evolutionary conundrum. However, as Chomsky has repeatedly stressed, communication appears to be a derivative function of language, whereas animal communication systems appear to be bona fide communication systems, given their remarkable adherence to the referentialist doctrine, which is clearly a feature of a well-designed communication system. So are we not comparing apples and oranges here? Perhaps this is part of Chomsky’s point, but this raises the question of whether animals have internal atoms, completely independent from the units of their communication systems, that are in fact continuous with the atoms of human language. The fact that humans use their atoms to refer is merely coincident with the fact that animals have their own referentialist communicative atoms; humans have figured out ways to hook up these mysterious cognitive objects to their communication systems, but this does not preclude the possibility that animals have them but are unable to communicate with them.

    Replies
    1. Yes, but as this is the only evidence cited that they have "words" like we do, there is currently none that they do. Of course they might. It is logically possible. But no evidence that they do. As continuity explanations generally look to these kinds of items, the absence of evidence in this case makes the discussion moot. Possible? Yes. Likely? No evidence at all.

    2. I don't know that we don't have evidence to this effect. Take perception of other individuals, for instance - it seems that animals recognize other individuals as the same individual even when that individual's appearance has radically changed. This is unlike the vervet call, for instance - the call is elicited by particular environmental configurations that elicit the response, the referentialist doctrine as Chomsky says. The animal does not "figure out" that a predator leopard or snake is around when deciding to make a call, but animals do "figure out" that a particular individual is present and use this information to act accordingly. I feel this is interesting evidence possibly in favor of internalist representations akin to ours, although I am no ethologist to be sure.

    3. Not sure I follow you here. The question is: might animals have "words" like ours? So what's a word? Well, here's a first pass: an expression that has an articulation and is linked to a concept. An obvious fit for this in the animal world might be the articulations animals use in communication. This assumption was in fact pursued by those interested in an evolang account for human language, as you know. Chomsky's point, I believe, is that IF this is what you had in mind, it has none of the properties our words have. Your point is that maybe there is something else that corresponds to our words, but they are not used in communication. Maybe they have concepts like ours (the meaning side, but not tied to an articulation). Well, that indeed is possible. Say they do. We now have an interesting question: why don't they link articulations to such concepts? After all, they can articulate and they have the concepts, so…? We are also left with the question of what the relation is between words and the concepts they link to. The only person I know who has thought about this extensively is Paul Pietroski. He distinguishes lexemes from the concepts that they link to. If he is right, then even were animals to have concepts similar to ours, they might still not have the wherewithal to have words similar to ours.

      I agree that all of this is pretty vague and I don't wish to sound more certain than I am (I'm very uncertain about these issues). What I think is interesting is that where things are clearish, there seems to be no continuity between what we see in the human case and the animal case. At the very least, this suggests that looking for explanations regarding our word like competence and what happens in animal communication systems is the wrong place to look. This is so even if there is another place to look for such an explanation.

    4. OK, let me set up my argument in a simple way. Language (the computational system) is meaning with sound - the interface between these two domains. Lacking language, there is no interface system of this kind. It is therefore no surprise that organisms sans language lack links between concepts and sensory-motor representations.

      Chomsky makes the analogy that "words" are like phonemes - they are internal symbols that don't adhere to the referentialist doctrine. Animals certainly have things like phonemes - abstract sensory-motor representations used for internal computation. And it seems to me likely that they have concepts. So what they lack is a system to connect the two mental objects - i.e., they lack language, the computational interface between meaning and sound. Ergo, the only notable discontinuity lies within the computational system.

    5. I like the argument, but it seems to me that one of the premises is clearly incorrect. It is possible to link a concept with an articulation in the absence of an FL. If not, getting dogs to sit when you say 'sit' would be pretty hard, not to mention those brilliant border collies we keep hearing so much about. In addition, the calls that have been studied that signal a predator and a predatory route seem on the surface to link concepts with articulations. So though syntax does link a meaning with an articulation, it does not seem correct that ONLY a syntax could do this. Again, the capacity seems to exist to link these two in the absence of anything resembling a hierarchically structured syntax of the Merge variety (another example is dolphins, which did learn flat grammars of some complexity and "words" that could occupy variable positions in the strings they learned). Consequently, I still see these as two different features of language, not one. And here, I believe I am on the same page as the Great One himself. Chomsky never tires of schlepping out both problems, as he does again in the previously linked-to paper, without suggesting that they are flipped sides of a common coin.

      If this is right, then it raises an interesting question: why if animals can link articulations with concepts do they not seem to link them the way we link them in NL words? Paul has one suggestion: because lexicalization is itself an operation mediating the link and lexicalization is species specific to humans. If so, this is a second "miracle" right next to the one that enabled the rise of the G system. I am partial to this view as it answers, or provides a framework for an answer, to the word puzzle that Chomsky keeps pointing to (not surprisingly as Paul's aim was to address these puzzles). The idea that words directly mediate a concept-articulation nexus seems to me hard to fit with what we know about NL words and articulations we find elsewhere. If you are right that animals have the same kinds of concepts we do (and I am very sympathetic to this) then it provides more conceptual backing for Paul's idea, I believe.

      At any rate, I agree that it would be great to reduce two miracles to one. However, I don't see the connection, nor does Chomsky, I believe. But a great try and I will keep thinking about it.

    6. In terms of the vervet monkeys, I don't have the reference off the top of my head, but I do remember a study of vervet monkey calls which indicate that the calls are elicited by some internal physiological state, not the abstract concept of a predator. The evidence was that monkeys who were giving the call associated with "snake" in response to experimental environmental stimulation transitioned to giving the call associated with "leopard" when the stimulation was intensified with no qualitative change. So vervet calls are like cursing or crying for humans.

      But I agree that there is somehow a linkage between sensory data and concepts for animals. The question is this arbitrariness of use: humans can say that a person is a horse, and animals can't do this. My intuition is that it's the computational system that does this - you can create the structure BOB IS HORSE, whereas the animal cannot, so they are stuck with the "literal" meaning of horse, i.e. the referentialist doctrine. But at this point I'm just speculating.

    7. I also smell a connection between the Waismannesque features of NL words and the capacity to combine these atoms arbitrarily. However, I don't yet see the direct link (sorry). At any rate, I will leave the last word to you should you decide to have one. Thx for the discussion, I found it helpful for organizing my own thoughts.

  2. This comment has been removed by a blog administrator.

  3. This comment has been removed by a blog administrator.

  4. A long section of Chomsky's 95 paper in Mind called Language and Nature deals with these arguments. Does this version have new arguments beyond the ones there? Or does it respond to the criticisms of that paper?
    I can't access it online for some reason.

    Replies
    1. Not sure. What were these criticisms? I am unfamiliar with them.

    2. The radical argument from ontology for example (the argument that there isn't anything called London, so there can't be a relation) has been attacked by a number of people, off the top of my head I can only think of Emma Borg, but Paul P knows the literature better than me.

    3. In fact Borg's view is that the challenge from "London", to quote her 2012 paper, page 525: "remain[s] pressing and it is, I think, this debate about the different accounts of word/concept content which underlie the opposing approaches that will form the next key battleground for debates between radical semanticists and radical pragmatists." So even by her lights, it is not the case that here "minimalist semantics" is as yet well enough justified to undermine the concerns of the other Minimalists, regardless of who wins the battle at Ragnarok.

    4. I *think* the paper I was thinking of was her "Must a Semantic Minimalist be a Semantic Internalist?" from 2009 in the Proceedings of the Aristotelian Society, but glancing at it I see the critique is more directed at Stainton's reconstruction of Chomsky rather than Chomsky himself, and also at Chomsky 2000 (New Horizons…) rather than the 95 Mind paper.

  5. One of those academic papers aggregate sites helpfully notified me of a recently uploaded paper, relevant to this post: Daniel Harris' "History and Prehistory of Natural Language Semantics" (http://danielwharris.com/HistoryPrehistory.pdf), which informs us that natural language semantics has in fact already moved on from the RD, sullen abstainers notwithstanding.

    I admit I've only skimmed the paper. The sense I get, though, is that we've apparently moved on not because of the sorts of worries that Chomsky (who is not cited) has voiced, but because natural language has so many types of non-declarative sentences. In what Harris calls the "communicative turn", what replaces (has replaced?) talk of truth conditions is talk of use conditions.

    Well, whatever has or hasn't happened, the net result is that we still get to avoid thinking about the nature of linguistic atoms and their meanings, how they (and combinations thereof) make contact with the rest of the mind, what might be humanly special about words, etc.

  6. Sorry, but the claim that animal signifieds are "referential" in this sense is unfounded. This seems like an off the cuff argument meant to be thought provoking and nothing more. See Schlenker et al's paper on alarm calls.

    http://ling.auf.net/lingbuzz/001792

    The basic pieces of meaning are (1) non-aerial predator and (2) general alarm.

    (1) is "referential" in roughly the way bachelor is, namely, in that there are some mind external entities that satisfy the criteria, namely, not at all in the relevant sense. (2) is "referential" in exactly the same way "hello" is.

    Replies
    1. I'm not sure that I get the point. Using Hockett's terms, these animal calls fail to display displacement: they only talk about the four Fs. They are tied to specific, though "general," features of the environment. Not everything is a predator, after all. No alarm calls over berry bushes. That the call is tied only to perceived occurrences of the referent is very different from what occurs in humans.

      Last point: the whole conceit behind referential semantics is to extend to general terms the basic relation one finds in proper names. All semantic relations are between words and (various kinds of) things.

      Conclusion: I don't really see that this is just thought provoking. It is a feature of animal calls that has frequently been noted, and it fits with the expanded view of reference that gets deployed in the motivational semantic literature. See Chomsky's paper. It starts with some quotes from the Chierchia and McConnell-Ginet book that make just this point.

  7. I think one needs to be careful about claiming that animal communication fits referentialism extremely well. Much of what is studied under that heading concerns signalling at enemies or potential partners, or serves other social functions. The alarm call system of Vervet monkeys has become so famous exactly because it was the first case where biologists could demonstrate thoroughly that the calls were 'functionally referential' as they call it. I think that once you recognize that animal communication, modern language and presumably the earliest language of our ancestors fulfill many different functions (including referential functions, but also marking group membership, impressing with your fluency, etc etc), the argument against some sort of continuity between animal communication and human language kind of evaporates.

    I am also wondering whether the two unique features of natural language you mention, the nature of words and the combinatorial system, are really independent from each other. In every discussion of the uniqueness of words that I have read (or at least those that I could understand), some reference is made to the fact that words participate in a combinatorial system (off the top of my head that is for instance true of Bloom, Jackendoff, Harnad, Deacon - I have a section on this in my chapter in The Language Phenomenon). To me it makes perfect sense that once the combinatorial system of language was in place, the nature of the atomic units that got combined by this system also had to change (i.e., the mental representation of a word now also needs to carry information about its role in the whole system [e.g., syntactic category membership] and information about how it relates to other words).

    Replies
    1. I agree that one needs to be careful. However, from what I can see the sense of the literature more or less comports with Hockett's original observations. If there is more going on in the animal cases, it would be good to see examples. Given that so many people have been looking at this for a very long time, I would have hoped that there were some concrete cases to discuss.

      I also am not sure that I follow your link between use and structure. You can use something for what it can be used for. The latter relates to structural features of, in this case, words. NL words have certain properties that permit them to be used in a wide variety of ways. What are these properties? Well, Chomsky enumerates some, but we really don't have a clear account yet. Do animal "words" have the same properties? Well, we have no evidence that they do. So, if they don't, this in part explains why they are used in the limited ways that they are (if this is indeed the case, as the current evidence suggests that it is). So, did our ape ancestors have NL-like words or not? If they did not, then I am not sure that they could have served the different functions that you point to. At any rate, this is not the real issue: the question is what the difference is between these two kinds of words and how they arose. What is the nature of that evolution?

      I would really LIKE it to be the case that the two properties reduce to one. Really, really like it. My problem is that I don't see how to do this. I agree it makes perfect sense. So, details? I cannot find a way of relating the kinds of facts that Chomsky points to and the combinatory properties words display. The only story I know of that discusses these distinctive features in anything like a systematic way is Paul Pietroski's theory of lexicalization. And though I really love it, it does not follow from Merge or any other combinatory operation. So, my desires track yours, but my brain fails to see how my desires can be fulfilled. A tragic situation which I would love to have someone remove me from.

  8. I suspect this news from the language evolution front will be discussed soon as well?
    http://www.scientificamerican.com/podcast/episode/climate-influences-language-evolution/

    Replies
    1. The results that you link to aren't about language evolution, at least not language evolution in the sense that Norbert has been blogging about.

      Chomsky makes this distinction in the paper that Norbert linked to in the first EVOLANG blog post (pp. 2-3): "Turning to evolution, we should first be clear about what it is that has evolved. It is, of course, not languages but rather the capacity for language, that is, UG. Languages change, but they do not evolve. It is unhelpful to suggest that languages have evolved by biological and non-biological evolution — James Hurford's term. The latter is not evolution at all. With these provisos in mind, I will use the conventional term 'evolution of language,' recognizing that it can be and sometimes is misleading."

      Of course, one can always stipulate one's own technical terms. So if one wants to call language change 'language evolution', that's fine. But then one needs to be very careful about distinguishing this from the evolution of humans' capacity for language. For the sake of making this clear, I'm sympathetic to Chomsky's point that one should simply avoid calling language change 'language evolution' and stick with the more transparent term 'language change'.

      Also, as a sidenote: it seems that the reporter, at least, makes a huge leap from correlation to causation. Who knows what the researchers themselves think, as scientific reporting is often not that great. But anyway, phonologists and historical linguists at least have a theory about how tonal languages arise: generally, they arise when a voicing contrast is lost. Vowels preceded by voiced consonants, for articulatory reasons, generally have a lower pitch, whereas vowels preceded by voiceless consonants have a higher pitch. So if a language loses a voicing contrast, subsequent generations might grammaticalize a tonal phonemic distinction. Most likely, then, the correlation that these researchers found arises from tonal languages developing in this way coupled with human migration patterns across history. Or something roughly like that.

    2. I don't think there are many people that confuse the evolution of the capacity for language with the cultural change of languages, so this particular warning of Chomsky c.s. seems pretty unnecessary. Meanwhile, many people working seriously on topics under the heading 'evolution of language' are now recognizing that a process we like to call 'cultural evolution' might have played a major role in giving natural languages the properties that they have, and that this process shares so many characteristics with biological evolution that it makes sense to use the term 'evolution'. I agree that everyone is free to choose their own terminology, but since there is a lot of serious work out there -- Jim Hurford's included -- using these terms I don't think the terminology is going to be thrown out of the window just because Chomsky doesn't like it. A lot has happened since Hockett and Lenneberg, folks! I think it's great that Chomsky seems to be getting a little more serious about evolution, after decades of discouraging discussion of the topic, but he is a bit late to the game, and a revision of established terminology is, I think, not the kind of contribution we're waiting for.

    3. I beg to differ. See, for example, Charles Yang's comment here:
      http://facultyoflanguage.blogspot.com/2015/01/how-to-make-evolang-argument.html?showComment=1421880087831#c8716839267255754430
      And your reply. It seems that we are running several different questions together. Chomsky and I want to know about the evolution of the CAPACITY. You seem interested in something else. I don't care what terms we use, but equivocation won't get anyone anywhere. So, use the terms as you wish, just don't run them together so as to confuse matters. It's clear what Chomsky is asking for, and to date I see nothing anyone has said that undermines his point.

      Now, it's great to hear that lots has happened since Hockett and Lenneberg. But for some reason those in the know don't want to tell us what this is. I invite you AGAIN to outline a story that takes any feature of the capacity and gives it an evo account that you consider a good one. It would provide a good counterpoint to Chomsky's skepticism (and mine). Take a feature of the capacity and lay out a good story. You can have all the space you want. Please.

    4. @Jelle Based on our conversation in Amsterdam last week, I think that we agree more than we disagree, and the disagreement appears to stem partly from terminology. Perhaps you (and I) could rehash some of the specific case studies we discussed? They are interesting problems that involve evolutionary dynamics, whether or not one wishes to see them as biological or cultural.
