Tuesday, August 21, 2018

Language and cognition and evolang

Just back from vacation and here is a starter post on, of all things, Evolang (once again). Frans de Waal has written a short and useful piece relevant to the continuity thesis (see here). It is useful because it makes two obvious points, and it is important because de Waal is the one making them. The two points are the following:

1.     Humans are the only linguistic species.
2.     Language is not the medium of thought for there are non-verbal organisms that think.

Let me say a few words about each.

de Waal is quite categorical about each point. He is worth quoting here so that the next time you hear someone droning on about how we are just like other animals, only a little more so, you can whip out this quote and flog the interlocutor with it mercilessly.

You won’t often hear me say something like this, but I consider humans the only linguistic species. We honestly have no evidence for symbolic communication, equally rich and multifunctional as ours, outside our species. (3)

That's right: nothing does language like humans do language, not even sorta kinda. This is just a fact, and those who know something about apes (and de Waal knows everything about apes) are the first to understand this. And if one is interested in Evolang, then this fact must form a boundary condition on whatever speculations are on offer. Or, to put this more crudely: assuming the continuity thesis disqualifies one from participating intelligently in the Evolang discussion. Period. End of story. And, sadly, given the current state of play, this is a point worth emphasizing again and again and again and… So thanks to de Waal for making it so plainly.

This said, de Waal goes on to make a second important point: even if no other animals have our linguistic capacities, even sorta kinda, it does not follow that none of the capacities underlying language are shared with other animals. In other words, the distinction between a faculty of language in the narrow sense and a faculty of language in the broad sense is a very useful one (I cannot recall right now who first proposed such a distinction, but whoever it was, thx!). This, of course, cheers a modern minimalist's heart cockles, and should be another important boundary condition on any Evolang account.

That said, de Waal’s two specific linguistic examples are of limited use, at least wrt Evolang. The first is bees and monkeys who de Waal claims use “sequences that resemble a rudimentary syntax” (3). The second “most intriguing parallel” is the “referential signaling” of vervet monkey alarm calls. I doubt that these analogous capacities will shed much light on our peculiar linguistic capacities precisely because the properties of natural language words and natural language syntax are where humans are so distinctive. Human syntax is completely unlike bee or monkey syntax and it seems pretty clear that referential signaling, though it is one use to which we put language, is not a particularly deep property of our basic words/atoms (and yes I know that words are not atoms but, well, you know…). In fact, as Chomsky has persuasively argued IMO, Referentialism (the doctrine (see here)) does a piss poor job of describing how words actually function semantically within natural language. If this is right, then the fact that we and monkeys can both engage in referential signaling will not be of much use in understanding how words came to have the basic odd properties they seem to have. 

This, of course, does not detract from de Waal's two correct observations above. We certainly do share capacities with other animals that contribute to how FL functions, and we certainly are unique in our linguistic capacities. The two cases of similarity that de Waal cites, given that they are nothing like what we do, endorse the second point in spades (which, given the ethos of the times, is always worth doing).

Onto point deux. Cognition is possible without a natural language. FoLers are already familiar with Gallistel's countless discussions of dead reckoning, foraging, and caching behavior in various animals. This is really amazing stuff and demands cognitive powers that dwarf ours (or at least mine: e.g. I can hardly remember where I put my keys, let alone where I might have hidden 500 different delicacies, time-stamped, location-stamped, nutrition-stamped and surveillance-stamped). And they seem to do this without a natural language. Indeed, the de Waal piece has the nice feature of demonstrating that smart people with strong views can agree even if they have entirely different interests. De Waal cites none other than Jerry Fodor to second his correct observation that cognition is possible without natural language. Here's Jerry from The Language of Thought:

‘The obvious (and I should have thought sufficient) refutation of the claim that natural languages are the medium of thought is that there are non-verbal organisms that think.’ (3)

Jerry never avoided kicking a stone when doing so was all a philosophical argument needed. At any rate, here Fodor and de Waal agree. 

But I suspect that there would be more fundamental disagreements down the road. Fodor, contra de Waal, was not that enthusiastic about the idea that we can think in pictures, or at least not that we think in pictures fundamentally. The reason is that pictures have little propositional structure, and thinking, especially any degree of fancy thinking, requires propositional structure to get going. The old Kosslyn-Pylyshyn debate over imagery went over all of this, but the main line can be summed up by one of Lila Gleitman's bon mots: a picture is worth a thousand words, and that is the problem. Pictures may be useful aids to thinking, but only if supplied with captions to guide the thinking. In and of themselves, pictures depict too much and hence are not good vehicles for logical linkage. And if this is so (and it is), then where there is cognition there may not be natural language, but there must be a language of thought (LOT), i.e. something with propositional structure that licenses the inferences characteristic of cognitive expansiveness in a given domain.
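To make the contrast concrete, here is a toy sketch (mine, purely illustrative, not a model anyone has actually proposed): a representation with conjunctive structure licenses conjunction elimination simply in virtue of having that structure, whereas a picture coded as bare pixels licenses nothing of the sort.

```python
# Toy sketch: propositional structure licenses inference; raw pixels do not.
from dataclasses import dataclass

@dataclass
class Atom:
    content: str          # e.g. "Granny left"

@dataclass
class And:
    left: object
    right: object

def infer_left_conjunct(p):
    """Conjunction elimination: from (P and Q), infer P."""
    if isinstance(p, And):
        return p.left
    raise TypeError("no conjunctive structure to exploit")

thought = And(Atom("Granny left"), Atom("Auntie stayed"))
print(infer_left_conjunct(thought))   # Atom(content='Granny left')

picture = [[0.2, 0.7], [0.9, 0.1]]    # a "picture" as bare pixel values
# infer_left_conjunct(picture)        # would raise: pixels have no logical form
```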

Again this is something that warms a minimalist heart (cockles and all). Recall the problem: find the minimal syntax to link the CI system and the AP system. CI is where the language of thought lives. So, like de Waal, minimalists assume that there is quite a lot of cognition independent of natural language, which is why a syntax that links to it is doing something interesting.

Truth be told, we know relatively little about LOT and its properties, a lot less than we know about the properties of natural language syntax IMO. But regardless, de Waal and Fodor are right to insist that we not mix up the two. So don't.

Ok, that's enough for an inaugural post-vaca post. I hope your last two weeks were as enjoyable as mine and that you are ready for the exciting pedagogical times ahead.

31 comments:

  1. Let me summarise an alternative sort of discontinuity hypothesis that you can find in a Biolinguistics paper I put out recently. The paper is quite dense and exploratory, but here's a simple story about theoretical parsimony.

    I grant that humans are the only linguistic species and I grant that thought is prior to language. More importantly, I also grant that words are special and syntax is special.

    Words are special because they are units of symbolic representation and we know of no other animal whose utterances are symbolic.

    Syntax is special because it aids the semantic composition of the meanings of words and we know of no other animal whose call syntax has a semantic function.

    (We can argue about whether every nonhuman syntax is only finite state in complexity but I don't think it matters. Show me a context free nonhuman syntax and you still have to prove that it performs a logical function on symbolic units for it to be constitutive of a linguistic capacity, and nobody dreams of arguing for this in nonhumans.)
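    (For concreteness, here's a toy sketch of that contrast; it's purely illustrative and not from the paper: two call "syntaxes" of different formal complexity, neither of which composes any meanings.)

```python
# Toy sketch: structural complexity without semantic composition.
import random

def finite_state_song(n):
    """A finite-state call sequence: any string over {chirp, trill}."""
    return [random.choice(["chirp", "trill"]) for _ in range(n)]

def context_free_song(n):
    """A context-free call sequence: n chirps then n trills (A^n B^n),
    beyond finite-state power for unbounded n."""
    return ["chirp"] * n + ["trill"] * n

print(finite_state_song(4))   # e.g. ['trill', 'chirp', 'chirp', 'trill']
print(context_free_song(3))   # ['chirp', 'chirp', 'chirp', 'trill', 'trill', 'trill']
# Neither sequence maps onto any composed meaning, so neither is linguistic
# in the relevant sense, whatever its complexity class.
```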

    Now, seemingly contrary to the 'evolve Merge, get language' story, we have *two* points of human uniqueness here, both of which seem difficult to imagine in gradualist terms. Maybe one could suggest gradualism for symbolic representation but, nonetheless, we have to ask a tough question: is human linguistic ability predicated on two independent, extremely improbable evolutionary events? If so, the odds against that happening seem to me sufficient to cast doubt on the theory. We want a theory that has one special event *at most*.

    There are three ways we might collapse this into there being only one discontinuity:

    (1) symbolic representation and syntax are really the same thing;
    (2) only symbolic representation is new;
    (3) only syntax is new.

    Clearly, (2) and (3) go against the description just given of words *and* syntax being uniquely human, but let's put that on hold to look at (1), which is what's left, and which seems to me untenable.

    1. It should be uncontroversial that symbolic representation and compositional structure are two utterly different phenomena. If that's not clear, take a moment to consider that humans symbolically represent all manner of things that are not compositional (e.g. essentially all art) and we are already buying that compositionality is not necessarily linguistic if we're taking there to be a compositional LOT that is prior to natural language.

      If one were to try to argue that symbolism and compositionality are nonetheless dependent upon one another in human evolution, so as to eliminate the improbability of two independent miracles, one would have to say that attaining one somehow makes attaining the other extremely easy. In my opinion, there is no sense to be made at all of the idea that having a compositional LOT itself makes its own symbolic representation possible, nor can one say that being able to symbolise things gives one compositional thoughts. These are both non-sequiturs.

      At present, this difficulty is often brushed under the carpet in Minimalist evolang, or it is tacitly assumed (and shrugged off) either that symbolism is parasitic on Merge for totally unexplained reasons, or symbolism isn't really *that* special (with a fair bit of hand-waving). You can try to tell these stories if you like but I think it's reasonably evident that they're lazy, or the resort of not knowing any better alternatives. So, the short of it is this:

      EITHER we've got to say that there WERE two astronomically unlikely independent events in human evolution after all, OR only one of symbolic representation and Merge-based syntax is actually humanly special. Those are the only intelligent options and I'm inclined towards the latter. In particular, to state it briefly for the moment: why argue for the human uniqueness of Merge when we might say that other animals have Merge (or something sufficiently like it), with there being no manifest evidence for it simply because they cannot represent it like humans can?

      Well, what's a Merge-generated structure if *not* a manifestly observable linguistic structure? Let's get to that.

      You rightly point out that there has to be some sort of LOT that has a compositional structure, i.e. a structure that is remarkably like the structure of language, which is no surprise because we're supposing that the structure of language interfaces with the LOT. But if what we are saying is that the structure of language is a thing *distinct from* the structure of the LOT---which is indeed what we're saying when we say that the LOT is prior to language and that Merge arose only with the advent of language---then we must argue not only that Merge evolved but that there also evolved a suite of mechanisms, currently undescribed, for interpreting the structures of Merge in the structure of the LOT.

    2. It sounds a bit redundant, but that's what's wrong with the theory. Recall the irrelevance of the complexity of nonhuman call systems. I can easily imagine a bird that produces call sequences with an identical structure to that of human language but that bird emphatically does *not* have a language faculty unless that structure is being used to compose meanings in a LOT. Language is structured sound in co-variance with structured meaning, not just structured sound.

      So, if the structure of language and the structure of the LOT are distinct things, there *must* also be mechanisms for translation between the two types of structure. If the structure of the language were *itself* sufficient for compositional interpretation, the bird we imagined would be impossible because its structured sound would get it the meaning we've just said it doesn't have. Now, if we're saying that we needed to evolve not only symbolic representation and Merge but also some undisclosed mechanisms for conceptually interpreting the structures generated by Merge, we're not doing so well with our modular minimisation of FL.

      So, I suggest we imagine a more minimal theory, where there is no interpretation of a Merge structure into a LOT structure because the Merge structure *is* the LOT. Of course, with the LOT being prior to language, the implication here is that Merge is *not* a linguistic computation; it's a conceptual one. Here, I'll anticipate and dissuade a likely reflection, which is that surely this is what Chomsky argues. That is, Chomsky often says that Merge evolved for concept composition first and then all the symbolic representation came later as a peripheral matter. Call it peripheral if you want, but what this amounts to is an acceptance that there must have been *two* miracles in human evolution: Merge and representation. The alternative I'm describing says that we lose nothing if we assume Merge to be there already, though we do need to reconfigure our understanding of what language/LOT syntax *is*. More in a moment.

      Let's return to the point I made before: if we presume Merge to be pre-linguistic and even instantiated in non-humans, we have no problem because without symbolic representation of whatever the LOT consists in, you're not going to see any of it. So, humans evolve symbolic representation as their *one* special feat and the structure we see uniquely in language is not unique *to* language but is rather uniquely represented *in* language, while nonetheless being entirely derivative of a structure that already exists in the LOT. You say, "truth be told, we know relatively little about LOT and its properties"; I'd say that everything we've discovered that is universal to the syntactic structure of language is a window onto the LOT itself.

      To draw an analogy, when we presume that concepts pre-exist words, we don't suppose that with the origin of words, concepts must somehow have been reinvented. Why, then, if we presume that the LOT has a compositional structure that pre-exists language do we suppose that with the origin of language, that structure had to be reinvented with Merge, which then requires an interface to get back to the LOT?

      The answer to that question has to do with the historical fact that we presume Merge to be part of a two-interface (CI & AP) system, whereas the LOT and its structure is purely CI. In my opinion, this turns out to be an argument against the two-interface system. Although I of course need a new story about how the LOT gets a linguistic representation if it has no AP interface (and I think such a thing is possible), we are at present prevented from imagining such an alternative because our current theory is constructed on the T-model as an axiom, rather than as something we have demonstrated.

      In a slogan: my argument is essentially that generative theory is not a theory of language. It's a theory that takes language as its data to construct a theory of thought, though we haven't quite realised this as a field.

    3. The argument is long and interesting. I even find it appealing. What do you take the relation between concepts and syntactic atoms to be? The reason I ask is that this is where I think that Pietroski has made interesting suggestions and I find his stuff persuasive wrt the distinction between what the syntax manipulates and what the terminals "fetch."

      Second, given the "ambiguities" (including open texture, context sensitivity, ambiguity) one finds in even structured NL expressions, how exactly is it that the linguistic semantics of a linguistic expression just IS its LOT? I assume that in LOT there is a specific thought, not a template that becomes a thought once other matters are settled.

      That said, I am inclined to think you are onto something. Wanna spell it out more in a longer post for FoL?

    4. I agree totally that Pietroski is on the ball with the relationship between concepts and syntactic atoms. A note of interest is that analogous revisions of the semantic properties of syntactic atoms can be found in the fringes of current relevance theoretic pragmatics, and in developments of Borer's XS syntax, which is also neo-Davidsonian. There's quite a bit of convergent evolution on this point, which I think will become more and more conspicuous.

      What's especially interesting to me is that, if syntactic atoms do not index concepts straightforwardly, then the situation is at least compatible with an architecture where the CI system has no interface with the AP system, either directly or via an autonomous syntax, which, traditionally, should generate unambiguous correspondences between AP and CI content (unambiguous to the generative system, that is).

      Now, I agree that in the LOT there are only specific thoughts, yet I'm also agreeing that linguistic expressions seem to give thought templates, so I surely can't be saying that the semantics of an expression is its LOT interpretation, though I can see I may have given that impression. What I actually have in mind requires us to think about what sorts of meanings it's possible for symbolic representations to carry. I'll save a full exposition but the case I'd make is that we can use PoS considerations to argue that symbolic representations, being arbitrary, cannot carry fixed meanings of any kind, so I would cast the non-conceptual meanings that Pietroski describes not as a different *sort* of meaning from LOT content, but rather as an *indeterminacy* in the relationship between expressions and LOT content, which gets resolved by contextual inferences. So, you get direct symbolic representations of the LOT, yet at the same time pervasive ambiguity because the representations aren't fixed.

      Implicitly, there's a fair bit I want to resurrect from Syntactic Structures, which, let's not forget, was not a double-interface model. Obviously, it's not compatible with a lot that we've learned, but I think we still have a lot to learn ourselves from SS. Anyway, it'd be great to spell it out some more on FoL. I won't beat people over the head with too much extra detail but a slightly different framing and a few references might be handy. I'll send you something?

    5. Please do. Happy to post it.

  2. @Callum: I'd be interested in seeing a post from you on this, too. The line of reasoning sounds interesting.

    For what it's worth, one step in the chain of reasoning that I'm a little skeptical of is the following:


    "EITHER we've got to say that there WERE two astronomically unlikely independent events in human evolution after all, OR only one of symbolic representation and Merge-based syntax is actually humanly special. Those are the only intelligent options and I'm inclined towards the latter."


    Do you have reason for preferring the latter? I certainly don't have reason to prefer the first option. I'm just a bit skeptical of discounting it if the only reason for doing so is the fact that it'd be two "significant" qualitative changes, which is somehow intuitively "unlikely".

    I don't really know how to evaluate claims of "significance" or "unlikelihood" in this domain without more detailed reasons/argumentation.

    If you do have more thoughts on this, I'd love to see it in the upcoming post on FoL.

    And all of this is not to say that I'm not on board with preferring the second option as a working hypothesis. I'm just curious if you have more to say about this step in the chain of reasoning.

    Looking forward to reading your eventual post!

    1. Thanks, Adam. I take it from your comments that you don't doubt the choice that I think we have to make but rather my preference for the latter option you quote? If you suspect that I falsely ruled out certain other options (like symbolism and syntax being co-dependent), I'd be very interested to hear.

      But if the choice is adequately stated, I initially share your skepticism about arguments from intuited probabilities - the right answers to these sorts of questions have a nasty habit of looking terribly unlikely before we know what they are, and unavoidable after we've figured them out.

      However, the sort of likelihood I was thinking about is quite superficial: if you look at the nine million extant species, only one in nine million uses symbols and only one in nine million uses structural composition and both turn out to be humans.

      I think perhaps a trickier issue is not so much the likelihood, but whether it matters in some important sense. Fitch points out quite nicely that an awful lot of traits in the biosphere are unique (how many animals besides elephants have had trunks?) because they are the result of chance combinations of historical contingencies that could have been different.

      Our difficulty is that, if we identify the faculty of language as a specific capacity for syntactically composing symbols, then neither the symbols nor the structure is a contingency that we could have done without, so it seems we're saying that language was not the winning of just one lottery, it was the winning of two.

      Now, one might very well want to argue that language is just the sort of thing that requires two lottery wins, based on our best understanding of what linguistic ability is. All I'm saying is that if it is conceivable that we might account for the ability in a simpler way, we ought to try. Incidentally, I think that, ten or twenty years ago, it was perhaps not conceivable. But then imagine the inconceivability of the Minimalist program in 1980 - every so often we have to take stock of what we've learned and see if we can't ditch some of the modularities we've become used to.

      Still, theoretical simplicity is only a baseline consideration. I believe that when we do try to account for language by positing Merge as a recent evolutionary acquisition, what we end up with is a grammatical architecture that actually doesn't cohere very well, so it's as much on linguistic as biological grounds that I'd make the case for a pre-linguistic Merge. There, we go beyond issues of probability to ones of realisability, and that's what I'll address in the post!

    2. I don't follow the statistical argument: You are assuming that only one species has undergone an adaptation (or mutation?) that led to symbol use, and you are counting nine million species as if they are examples of something that could have undergone that change but didn't. First of all, 85% of those species haven't been discovered yet, second of all fewer than a half a percent of them are vertebrates, third of all how would you know whether one of them had undergone that change?

    3. You're right that it misrepresents the situation to suppose that every speciation event is a roll of a nine million-sided die and that's not quite what I meant by the analogy, my misleading use of the word 'lottery' notwithstanding. What I'm getting at is that symbolic behaviour appears to be generally inaccessible on the evolutionary landscape (so I'm talking in phenotypes, not mutations). Of course, you travel the landscape by gradual changes but what we're faced with is a history of life that has gone through nine million species' worth of tinkering and that has produced symbol use only once.

      I still state it categorically because I don't think our general ignorance of extant species is a cause for reticence - we know definitionally that we're not going to find it elsewhere unless we expect other human-like intelligences, and this is why its inaccessibility is a serious issue. The capacity to use symbols as we understand them is utterly dependent upon shared intentionality (i.e. recursive theory of mind), itself an engorgement of general social intelligence. Not only is social intelligence itself a rare product of natural selection (and one pretty well accounted for), but there is also no suggestion, never mind evidence, that even other socially intelligent animals exhibit symbol use.

      All the indications are that, to acquire a symbolic capacity, a socially intelligent species has to occupy some peculiar evolutionary niche that we are yet to reconstruct, which spurs a very particular sort of cognitive development. There is a huge amount of chance involved in occupying that niche with all the right cognitive prerequisites, hence the trait is biologically improbable.

      I think it's important here that we don't treat symbolic behaviour too lightly, as what sounds like neutrality is, I think, cynicism. There are good reasons why the first archaeological evidence for symbols is taken as the first sign of civilization, so how skeptical are we willing to be about the uniqueness of human civilization given that 85% of species remain undiscovered? Indeed, all the same skeptical arguments could be given for the language faculty itself, so are we willing to be even-handed? Are we holding off on pronouncements of the uniqueness of human language in case some undiscovered invertebrate turns out to have it?

      As I intimated before, I feel that, embedded in these discussions, there is an under-appreciation of the necessity and species-specificity of symbolic behaviour to language, as well as its logical independence of syntactic structure. As I mentioned, you could have a birdsong equal to natural language strings in structural complexity and yet totally without semantics and therefore totally non-linguistic. Where is the semantics coming from in natural language? It isn't put into the language by Merge - Merge just structures the things that already have meaning - you get it with the human capacity to represent things symbolically. This is something that has to be accounted for in grammatical theory.

    4. I think the number nine million is irrelevant to the question of whether it is irresponsible to suggest that symbol use and merge are distinct abilities. Plants and algae and fungi are just not part of the equation. You could more germanely ask, in species whose social behavior is sophisticated enough that symbol use could be detected, does symbol use emerge? I don't know how many cases there are, maybe just one, maybe four, maybe a thousand, but not nine million.

      A different question would be, what is the neural correlate of symbol use? Suppose it's some kind of arrangement by which two systems normally interlinked are selectively dissociated. In humans, given all the other stuff, this manifests itself in symbol use. But maybe it manifests itself differently in other animals. For example puffer fish draw strange designs in the sand. Maybe they have the same mutation as the one that gives us symbol use. In that case, we really don't have a clue how common it is.

    5. You're right that the focus on the nine million species may not be very helpful for the point I wanted to make, but over and above the question you pose, which I think is well-framed, I do think we need some way of being able to say that certain evolutionary developments appear to be generally inaccessible to the evolutionary process. For example, we can talk about the accessibility of vision, flight and echolocation because they have arisen through convergent evolution, while there has been no convergence on symbolic behaviour.

      On your second point, I could get on board if what you're saying is that we really don't have a clue how common the neural prerequisites for symbol use are but I'm not sure what light this sheds on the question, given how deeply various genetic and neurological homologies can be preserved across species that are phenotypically very different. Whether the prerequisites are common or rare, symbol use itself nonetheless remains unique and the best suggestion for why that is the case is that ancient hominids occupied a very special evolutionary niche. I think the rarity of that niche is in many ways more important than the rarity of the neurology, though there is obviously an interplay to account for.

  3. It seems to me that Fodor used non-human animals to motivate the existence of LoT as distinct from language, as you mention, but then when he discussed the nature of LoT he never made any further reference to animal cognition, basing all of the properties of LoT on evidence from human cognition, such as logical reasoning. For example:

    “Finally, and perhaps most important, I still think LOT 1 was right about the proper relation between logical theories and theories of reasoning. The essence of that relation is that mental representations have ‘logical form’ (‘logical syntax’, as one sometimes says). From this point of view, what matters about the thought that Granny left and Auntie stayed is its being a conjunction, which means (according to RTM) that the logical syntax of the mental representation that you token when you think that thought is conjunctive. That the logical syntax of the thought is conjunctive (partially) determines, on the one hand, its truth-conditions and its behavior in inference and, on the other hand, its causal/computational role in mental processes. I think that this bringing of logic and logical syntax together with a theory of mental processes is the foundation of our cognitive science; in particular, the main argument for a language of thought is that, very, very plausibly, only something that is language-like can have a logical form.” LOT 2: 21

    Now, do we actually have any evidence that non-humans think thoughts with this kind of structure, for example conjoined propositions as in Fodor's example, or that non-human animals can apply rules of logic to derive inferences? Couldn't it be, contra Fodor, that human thought is the way it is because of language, and that non-linguistic thought is different? I once heard a talk that argued that cetaceans don't have a sense of object permanence, for example -- if a fish they are chasing magically disappears through an experimenter's trick, they aren't surprised. I'm not saying I believe that's an established result, just saying it is plausible that animals think rather differently from how we do.

    1. On Fodor, I agree that he wasn't really interested in nonhuman cognition and so I think we can take his references to it as largely rhetorical; he had separate arguments for the independence and priority of a LOT to natural language. Indeed, I don't think there have been any coherent counter-arguments in favour of human-like thought being *derivative* of linguistic ability, and such arguments in any case fly in the face of nativism.

      I imagine what you have in mind is rather something else, which I also understand Chomsky to have in mind, namely, that natural language and a similarly structured LOT are in some sense co-existent, neither being prior to the other. I suggested above that this is likely a redundancy and that there is not much sense in it because structure and its representation are subserved by completely different cognitive abilities. The position can only be maintained with an argument that symbolism and syntax are two sides of a coin, or that one leads to the other, and though this is often assumed (e.g. Why Only Us), I've never seen it substantiated.

      On the question of whether there is any evidence for nonhuman LOTs of the kind Fodor describes, Gallistel is much more qualified than me and he at least says that much of nonhuman cognition is properly regarded as having a predicate-argument structure, so that it is plausibly the communication of such structure (what Chomsky would call externalisation), and not the structure itself, that characterises human uniqueness. Gallistel has generally been taken as a friend to the generative cause and we might listen to him a little more on this point.

    2. From the little I know about animals, they can be trained to be sensitive to disjunctions and conjunctions. So, yes, they are sensitive to, and will behaviorally react to, logical structure. That non-speakers can track logical forms (under training at least) is not controversial, I don't believe.

    3. This comment has been removed by the author.

    4. Can animals be trained to use symbols, too? Superficially, it would seem yes, but maybe this question has been addressed in a more sophisticated form.

    5. To the best of my knowledge, no functional or logical symbol use has ever been demonstrated in nonhumans and this result is uncontroversial.

      More controversially, I don't believe it has ever been demonstrated that nonhumans can use contentful or lexical symbols either. There have been several famous sign language studies, and various great apes have been observed to use their own gestural systems in the wild, but all their 'symbol' use appears to be goal-oriented - that is, nonhumans point to and/or gesture signs through behaviouristic learning in order to get others to act in a desired way, *not* for the signs to represent anything.

      This is of course not what symbols are for humans. For us, symbols are not goal-oriented, they stand in some sort of representational relationship to concepts. The only reason why some ethologists have said that nonhumans can use symbols is because they implicitly believe that a behaviouristic account can be given of human symbols too, but Chomsky's review of Skinner should have put that notion to bed.

      To those who accept that there is no nonhuman symbol use, these results have sometimes been taken to mean that the nonhumans in question just don't have the concepts that they would otherwise readily represent, but I think this goes too far. All we know at this stage, I believe, is that nonhumans do not have a capacity to symbolically represent concepts that they may or may not have. The former position presumes that symbolic representation in some sense comes for free once you've got concepts to represent; I think that's very far from the truth.

    6. I think much rests on what counts as "symbol use" here. It's pretty obvious that you can get conjunction effects in perception (e.g. the literature on feature integration effects). Here's an article ( https://www.ncbi.nlm.nih.gov/pubmed/?term=16957608 ) showing an MMN-like response in rats to deviants defined only by the conjunction of a frequency difference AND an amplitude difference. And multi-sensory integration would seem to be simply impossible without notions of AND and OR.

      More generally, Vigo's Modal Similarity theory (e.g. https://www.ncbi.nlm.nih.gov/pubmed/?term=18626674 ) tries to cover the ground between perception and inference in a principled way.

      Of course this issue has been around quite a while. From Vigo & Allen:

      "The Stoic philosopher Chrysippus of Soli told of a hunting dog pursuing prey who, on coming to a crossroads, sniffed two of the roads leading away and immediately set off down the third without sniffing. Sorabji (1993) reports that Chrysippus did not take this to show that the dog really reasons, but only that it ‘‘virtually’’ goes through a syllogism: ‘‘The animal went either this way, or that way, or the other way. But not this way, or that way. So that way’’ (Sorabji 1993, p. 26). Medieval logicians referred to the binary choice version of this disjunctive syllogism as modus tollendo ponens (MTP), and it has occasionally been called the ‘‘rule of dogs’’"

  4. I'm probably missing some important assumption here, but what's your theory of symbolic representation that gives us a demarcation criterion so we can decide that humans have it and other animals don't? In what sense wasn't Kanzi using symbolic representations with his lexigrams? Didn't Rico associate arbitrary human vocalizations with particular objects? If we just take a Peircean or Saussurean view of symbols, it would seem that they're rife, cf. vervet calls, ring-tailed lemurs and others (which is why semiotics as a field of study is so loose around the edges). Do you have something else in mind? If other animals do have a capacity for symbolic representation, then the whole argument collapses, doesn't it? (I take it what is at issue is capacity, not use - animals may have a capacity for symbols but not use them, which still means we'd share the relevant genetic specification with them).

    I might buy the story if it was cast in terms of a generative capacity to create symbols, like we see where Homesigners effortlessly create symbols (signs) for concepts they need, but then that seems much more like syntax.

    1. Hi David, I'm not sure if you're addressing Norbert or me, as I've been banging on about symbols myself, but I'll reply as I think these points are important. For my part at least, yes, if other animals use symbols, everything I've said collapses – I'm betting everything that's gone into my research not only that they don’t, but that it's demonstrable beyond doubt. It's a bet I could lose but those are the stakes! So, certainly, what I mean by symbols must be something different from what you mean.

      First, something that should be on the table is that the Peircean and Saussurean models are not the same and they cannot both be true of the same system at the same time. As it happens, I think we mislead ourselves by treating natural language symbols as Saussurean, as I think it’s impossible, and if we fully appreciated just how they are actually Peircean, we would already be clear that nonhuman communication cannot possibly be symbolic. But there are a lot of formal foundations to weaving that web that I don't want to presume here, so I'll try to go at a theory of symbols more informally.

      A useful starting place is to recognise that a *sign* is not the same thing as a *symbol*, so here's a distinction I have in mind that you seem not to be drawing, though the distinction isn't only mine - Peirce had his tri-partite division of signs into icons, indices and symbols, and though Saussure talked about signs, what he was really describing were symbols, in that his theory does not apply to what modern evolutionary theory calls signs.

      One example of why a distinction is needed is avian courtship. Many birds have dances that are clearly meaningful for dancers and spectators ('I have good genes', 'I want to mate'), and they are thus evolved signs that stand for those meanings, but only in the sense that the dances behaviourally indicate *states* of mind which we can paraphrase; they are not intended or taken to represent mental *content*, which is partly what I take symbols to be.

      For an even clearer boundary case, we might look at the colours of poison dart frogs, which signify the frogs' poisonousness without them even being aware of it. If we're going to have a theory in which frog colours are treated as symbols, we can make that clear, but we'll be robbing ourselves of a term for signs that represent mental content and we'll have to invent a new one to keep talking about them. As such, I think the basis of this discussion needs to be that symbols are just one type of sign and we have to figure out what their cognitive foundations are without conflating them with other types.

      Now, assuming terminology so that frog colours and courtship dances are non-symbolic, it's well-established in the ethological literature that vervet calls and other 'referential' behaviours are not qualitatively distinct, they just happen to be acoustic rather than visual. Despite this, they are similarly functional and automatic, and not even intended to be recognised by others. That any lower primate signs are communicatively useful at all is a conspiracy of natural selection; the animals themselves don't act with intent to represent anything.

      Kanzi and Rico are different in that their signs were learned and not automatic, and thus plausibly representative of communicated mental content. What goes against this is that their signs were still unequivocally referential, as there was always either a connection between a sign and a particular object (never a general kind of object), or between a sign and a behavioural goal (like 'orange' being used always to get an orange, never to signify oranges in general). In other words, their sign systems could be thoroughly captured in terms of Skinnerian reinforcement, and it is exactly the applicability of a behaviouristic model that shows these signs didn’t point to the sort of mental content we’re interested in.

    2. Thinking of Peirce again, all nonhuman signs are properly construed as indices, as they all have overt causal associations with the things they refer to, so we in fact already have a demarcation on hand that says they’re not symbolic. A sign is symbolic only when it is somehow grounded so that speakers can use it to represent mental content.

      Critically, as I suggested elsewhere, this can't be done without recursive theory of mind, because we can only agree on the mental content that an arbitrary sign should represent if we can recognise each other’s communicative intentions. While it often looks like other animals achieve this in sign language studies, the reality is that communicative intentions are sidelined by signs always having an indexical grounding to objects and behaviours; where we think we’re seeing symbols, we’re anthropomorphising. Even continuity theorists like Tomasello acknowledge that no great apes besides us have the sort of theory of mind that is needed for symbols, so there is already an evolutionary consensus that nonhumans don’t have the foundations for such a capacity, never mind the capacity itself.

      So, the crucial lesson for me in all this is that people shouldn’t be looking at sign use in other primates and thinking that there’s some meagre quantitative difference leading to human signs that biologists will explain, any more than we allow people to get away with continuity theories of syntax. Let’s be clear: human language is *not* vervet calls with added unbounded structure. As Norbert often points out, Chomsky has spent a lot of time talking about all nonhuman meaning being referential, while none of linguistic meaning is, and this isn’t a syntactic phenomenon; it’s because the kinds of signs we use are fundamentally different in what they represent, and the unique manner of their grounding requires unique cognitive abilities that linguists ought to take an enormous interest in.

      On your latter point, I think it could only be metaphorical to talk about a generative capacity for symbol creation. To my mind, there is one innate generative module and it is driven by Merge, it just generates thoughts rather than sentences, and having a proper semiotic theory will allow us to see how this is both possible and necessary. You say quite rightly that semiotics is loose around the edges but the reason for that, I feel, is not its intrinsic nature; it's because the theory of signs is awaiting the same sort of cognitive revolution that syntax underwent in the last century and, when it gets it, the two will integrate in a way that will push linguistics into exciting new territory.

    3. Thanks Callum, I see where you're coming from. So my only thought here is that you say "their sign systems could be thoroughly captured in terms of Skinnerian reinforcement", and while that's perhaps true, it doesn't mean that that's the right way to capture them (I kind of believe it's not the right way to capture anything). So there's a logical lacuna in the argument, right? That is, we don't know that Kanzi doesn't have a human-style symbolic system, limited by his other capacities. Perhaps he links the lexigrams to conceptual content and then uses these to achieve his limited goals? More importantly for the evolutionary issue, Kanzi may well have all of the components of human symbolization, but his other systems are configured so as not to be able to make use of them.

      I'm definitely not a fan of continuity. I don't even see how it's logically possible for these cases. But an alternative possibility is that it's the configuration of capacities that gives us humans a generative linguistic system, not the basic capacities themselves (this is the position that Peter and I sketched in our old Frontiers paper). So it's possible that the subcomponents of Merge and of Symbolization are present in other species, just differently configured. My own view is that even something like Merge is not atomic. It consists of computationally more basic operations (e.g. read, write, group, etc.) constrained by a particular organization (probably a memory organization). Each subcomponent may well be around in other species, or elsewhere in human cognition, but the way that those elements are configured is different. Perhaps, even, their computational memory isn't so restricted, so they are never forced into binarity and hierarchy. Then it would actually be a limitation of humans that ends up giving us a generative system. The same could be true for symbolization, which would, I guess, dissolve the worry you started with in the response to the OP. But where we'd agree is that more work needs to be done to get anywhere near that.

    4. One quick thing is that I don't share your skepticism about Kanzi's symbolic abilities because what you're describing is specifically the possibility that Kanzi might have Saussurean symbols, and I think Saussure was wrong about what symbols are. This is the key issue because the grammatical architecture we assume to be assembled in humans, whose parts thereby inform your description of Kanzi, itself only works if symbols are Saussurean, and they're not. Evidently, I keep promising an alternative, so I'd better give it! To avoid repeating myself, it'll be in the post that'll go up some time soon.

  5. I think there’s another piece of the picture that is getting lost in these discussions. One of the observations that launched Distributed Morphology (though DM is not the only contemporary framework capable of capturing this observation) is that syntactic atoms are not, generally speaking, the locus of phonological insertion or the locus of semantic insertion. Syntactic atoms combine to form insertion contexts for phonological content (“vocabulary items”) and for semantic content (“encyclopedia entries”), but these P-side and S-side insertion contexts need not align with one another, and neither needs to align with the granularity of a single syntactic atom. Indeed, whether this _ever_ happens anywhere in language (one syntactic atom mapping on to one vocabulary item and one encyclopedia entry) is very much an open question, I think.

    So, for example, “geese” is, conservatively speaking, three syntactic terminals (the root, n, and plural Num), it is probably a single P-side entity (=vocabulary item), and probably two S-side entities (GOOSE and NON-ATOMIC).

    “In cahoots” is, at the very least, four syntactic terminals (P, Num, n, and the root), three P-side entities (“in”, “cahoot”, “-s”, which combine systematically and according to the morphophonological rules of English, incl. devoicing of the final “-s” which is underlyingly /-z/), and is perhaps a single S-side entity.

    As I say above, it is not clear at all – to me at least – whether there truly are ever individual syntactic terminals that map onto individual P-side entities (vocabulary items) and individual S-side entities (encyclopedia entries). But even if there are, that is in some sense an accident: the insertion context for a given P-side entity might, by accident, align with the insertion context for a given S-side entity; and by further accident, the context in question might consist of a single syntactic atom; but that is a boundary case, not the defining characteristic of the system.
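    To render the sort of misalignment just described concretely, here is a toy sketch; the labels and the exact span assignments are my own guesses, purely for illustration, and nothing here is DM's actual formalism.

```python
# Toy sketch: P-side and S-side insertion contexts as spans over syntactic
# terminals, with no requirement that the two carve up the terminals alike.

geese_terminals = ["ROOT_goose", "n", "Num_pl"]              # three terminals

geese_P_side = [                                             # one vocabulary item
    {"span": ["ROOT_goose", "n", "Num_pl"], "vocab_item": "geese"},
]
geese_S_side = [                                             # two encyclopedia entries
    {"span": ["ROOT_goose", "n"], "entry": "GOOSE"},
    {"span": ["Num_pl"], "entry": "NON-ATOMIC"},
]

cahoots_terminals = ["P_in", "Num_pl", "n", "ROOT_cahoot"]   # four terminals

cahoots_P_side = [                                           # three vocabulary items
    {"span": ["P_in"], "vocab_item": "in"},
    {"span": ["ROOT_cahoot", "n"], "vocab_item": "cahoot"},
    {"span": ["Num_pl"], "vocab_item": "-s"},                # underlying /-z/, devoiced
]
cahoots_S_side = [                                           # perhaps one entry
    {"span": ["P_in", "Num_pl", "n", "ROOT_cahoot"], "entry": "IN-CAHOOTS"},
]

# The P-side spans and the S-side spans need not align with one another,
# and neither need align with a single terminal.
assert geese_P_side[0]["span"] != geese_S_side[0]["span"]
```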

    I’d be more than willing to buy that humans _have_ individual P-side signs that correspond to individual S-side units (maybe something like “meh” or “pfffft”). But these are not generally the kind of entities that enter into the type of symbolic computation we are talking about when we’re talking about natural language syntax.

    From this perspective, all the talk about whether human language involves “symbols” or “signs” or “indices” is off the mark in the first place. Human language doesn’t work in any of these ways.

    1. Strange. It's precisely *because* natural language symbols require the kind of grounding that they require that there logically cannot be syntactic terminals that map onto individual P-side entities and S-side entities. DM treats this as a discovery but it is an insight we could have had fifty years ago if we'd developed a semiotic theory that actually made sense. What your comment does is criticise Saussurean significance, which I criticised myself. What it doesn't do is engage with the theory of signs in general, so your conclusion that talk of symbols is off the mark is itself off the mark.

    2. My point is that, extra-syntactic “meh”s and “pffft”s aside, there is generally no alignment between anything on the meaning side and anything on the sound side that is not mediated by syntax. And not just by syntactic structure, mind you, but by the syntactic derivation. (Since the insertion contexts mentioned above need not be base-generated ones, but can instead be derived ones.) Now, perhaps there is a notion of “sign” that is broad or abstract enough to encompass this, but it’s not one I’m familiar with – and I’d venture to say, furthermore, that it would constitute a fairly unintuitive use of the term (given its non-technical connotations), and so perhaps some better terminology might be called for.

    3. This comment has been removed by the author.

    4. Thinking about it, I have to correct myself. Earlier, I suggested that because you repudiated the importance of individual syntactic terminals that map to individual P-side entities and S-side entities, you were therefore critiquing the Saussurean model of significance. But actually, if you're saying that there are direct sound-meaning correspondences, just ones that involve derived structure rather than holding at the level of terminals, then this is still Saussurean; your beef is that a theory of signs that includes syntax in the signs seems to go beyond what such theories are ordinarily about. Point taken. But then I would stand by my criticism of all Saussurean models, which now includes the one you're describing. I haven't developed in detail what I said was the Peircean alternative, but I'll do that in the post I said I'd write.

    5. I don't see how what I'm saying can be understood as involving "direct sound-meaning correspondences." I gave a couple of examples where PF morphologizes one bit of structure, and LF interprets another bit of structure, and those two bits of structure are distinct (albeit, overlapping). I don't even see how one can talk about correspondences here, in either direction. What is the P-side correspondent of the S-side element NON-ATOMIC in the expression "geese"? What is the S-side correspondent of the P-side element "cahoot" in the expression "in cahoots"?

      I was saying (well, the people I was referring to are saying; I was just reproducing their arguments) that there is no correspondence between P-side and S-side entities at all, in the general case. There is only correspondence between syntax and PF, and between syntax and LF; but since the relevant bits of syntax need not be the same in both cases, there is no direct correspondence between morphemes and meanings.

    6. There's certainly no direct correspondence between morphemes and meanings, which is why it is Saussureanism with syntax included. What you still have are direct correspondences of (structured) PF-LF pairs, in that there is no question left after derivation what the interpretations of 'geese' and 'in cahoots' are, regardless of how they are constructed. Or, to put it another way, any one syntactic derivation has a definite PF reading and a definite LF reading, so its function is to produce a PF-LF pair.

      You might say it's a reduced notion of direct correspondence, but a system genuinely free of direct correspondences would be one in which the syntax of 'geese', say, doesn't give it any interpretation at all because interpretation doesn't happen in the grammar. This isn't to say that syntax wouldn't be involved in interpretation, but that its involvement would perhaps be no more substantial than what the phonological syntax of d+o+g does for us as compared to c+h+i+e+n for French speakers.
