
Wednesday, October 10, 2018

Birds, all birds, and nothing but birds

I know, just when you thought it was ok to go back into the water. He’s back!! But rest assured this is a short one and I could not resist. It appears (see here) that biology has forsaken everything that our cognoscenti have taught us about evolution. We all know that it cannot be discontinuous. We all know that the continuity thesis is virtually conceptually necessary. We all know this because for years we have been told that the idea that linguistic facility in humans is based on something biologically distinctive that only humans have is as close to biologically incoherent as can be imagined. Anybody suggesting that what we find in human language might be biologically distinctive and unique is a biological illiterate. Impossible. Period. Creationism!

Well guess again. It seems that the bird voicebox, the syrinx, is biologically sui generis in the animal kingdom and “scientists have concluded that this voice box evolved only once, and that it represents a rare example of a true evolutionary novelty” (1). 

But surely they don’t mean ‘novelty’ when they say ‘novelty.’ Yup, that is exactly what they mean:

“It’s something that comes out of nothing,” says Denis Duboule, a geneticist at the University of Geneva in Switzerland who was not involved with the work. “There is nothing that looks like a syrinx in any related animal groups in vertebrates. This is very bizarre.”

Now, as the little report indicates, true novelties are “hard to come by.” But, as the syrinx indicates, they are not conceptually impossible. It is biologically coherent to propose that these exist and that they can emerge. And that their distinctive properties are exactly what people like Chomsky have been suggesting is true of the recursive parts of FL (4).

They are innovations—new traits or new structures—that arise without any clear connections to existing traits or structures. 

Imagine that, no clear connections to other traits in other species or ancestors. Hmm. Are these guys really biologists? Probably not, or at least not for long, for very soon their credentials are sure to be revoked by the orthodox guardians of EvoLang. Save me! Save me! The discontinuitists are coming!

The report makes one more interesting observation: these kinds of qualitatively new innovations serve as interesting gateways for yet more innovation. Here, the development of the syrinx could have enabled songs to become more complex, and biologists speculate that this might in turn have led to further speciation. In the language case, it is conceivable that the capacity for recursion in language led to a capacity for recursion more generally in other cognitive domains. Think of arithmetic as a new song one can sing once hierarchical recursion has snuck in.

Is all of this correct? Who knows? Today the claim is that the syrinx is a biological novelty. Tomorrow we might find out that it is less novel than currently advertised (recall, for Minimalists FL is unique, but not that unique. Just a teensy weensy bit unique). What is important is not whether it is unique, but the fact that biology and evolution and genetics have nothing against unique, sui generis, one-of-a-kind features. They are rare, but not unheard of, and not beyond the intellectual pale. That means that entertaining the possibility that something, say hierarchical recursion, is a unique cognitive capacity is not living out on the intellectual edge in evolutionary La-La land. It is a hypothesis, and one that cannot be dismissed by assuming that this is not the way biology works or could work. It can so work, and seems even to have done so on occasion. That means critics of the claim that language is a species-specific capacity have to engage with the actual claims. Hand waving is simply dishonest (and you know who you are).

Moreover, we know how to show that uniqueness claims are incorrect: just (ha!) show how to derive the properties of the assumed unique organ/capacity from more generic traits, and show how the trait/organ under consideration could have continuously evolved from these using very itty bitty steps. Apparently, this was done for fingers and toes from fish fins. If you think that hierarchical recursion is “just more of the same,” then find me the fins and show me the steps. If not, well, let’s just say that the continuists have some work ahead of them (Lucy, you have some explaining to do) if they want to be taken seriously, and that there is nothing biologically untoward or incoherent or wrong in assuming that sometimes, rarely but sometimes, novelties arise “without any clear connections to existing traits and structures.” And what better place to look for a discontinuity than in language?

Let me end by adding two useful principles for future thinking on topics related to language and the mind:

1.     Chomsky is never (stupidly) wrong

2.     If you think that Chomsky is (stupidly) wrong go back to 1

Friday, June 16, 2017

Vapid and vacuous

It’s hard to be both vapid and vacuous (V&V), but some papers succeed. Here is an example. It is, of course, a paper on the evolution of language (evolang) and it is, of course, critical of the Chomsky-Berwick (and many others) approach to the problem. But the latter is not what makes it V&V. No, the combination of banality and emptiness starts from the main failing of many (most? all?) of these evolang papers: it fails to specify the capacity whose evolution it aims to explain. And this necessarily leads to a bad end. Fail to specify the question and nothing you say can be an answer. Or, if you have no idea what properties of what capacity you aim to explain, it should be no surprise that you fail to add anything of cognitive (vs phatic) content to the ongoing conversation.

This point is not a new one, even for me (see, for example, here). Nor should it be a controversial one. Nor, to repeat, does it require that you endorse Chomsky’s claims. It simply observes the bare minimum required to offer an evo account of anything. If you want to explain how X evolved then you need to specify X. And if X is “complex” then you need to specify each property whose evolution you are interested in. For example, if you are interested in the evolution of language, and by this I mean the capacity for language in humans, then you need to specify some properties of the capacity. And a good place to start is to look at what linguists have been doing for about 60 years.

Why? Because we know a non-trivial thing or two about human natural language. We know many things about the Gs (rules) that humans can acquire and something about the properties required to acquire such Gs (UG). We have discovered a large number of non-trivial “laws” of grammar. And given this, we can ask how a system with these laws, generating these Gs, (might have) evolved. So, we can ask, as Chomsky does, how a capacity to acquire recursive Gs of the kind characteristic of natural language Gs (might have) evolved. Or we can ask how a G with these properties hooked up to articulation systems (which we can also describe in some detail) might have evolved. Or we can ask how the categorization system we find in natural language Gs (might have) evolved. We can ask these questions in a non-trivial, non-vacuous, non-vapid way because we can specify (some of) the properties whose evolution we are interested in. We might not give satisfactory answers, mind you. By and large the answers are less interesting than the questions right now. But we can at least frame a question. Absent a specification of the capacity of interest there is no question, only the appearance of one.

Given this, the first thing one does in reading an evolang paper is to look for a specification of the capacity of interest. Note: saying that one is interested in explaining the evolution of “language,” without further specification of what “language” is and what capacities are implicated, is not to give a specification. Unfortunately this is what generally happens in the evolang world. As evidence, witness the recent paper by Michael Corballis linked to above.

It fails to specify a single property of language (more exactly, the capacity for language, for it is this, not language, whose evolution everyone is interested in) yet spends four pages talking about how it must have evolved gradually. What’s the it that has so evolved? Who knows! The paper is mum. We are told that whatever it is is communicatively efficacious (without saying what this means or might mean). We are told that language structure is a reflection of thought and not something with its own distinctive properties, but we are not given a single example of what this might mean in concrete terms. We are told that “language derives” from mental properties like the “generative capacities to travel mentally in space and time and into the minds of others” without being given a specification of either the relevant generative procedures of these two purported cognitive faculties or a discussion of how linguistic structures, whose properties we know a fair bit about, are simple reflections of these more general capacities. In other words, we are given nothing at all but windy assertions with nary a dollop of content.

Let me fess up: I for one would love to see how theory of mind generates the structure of polar questions or island effects or structure dependency or c-command or anything at all of linguistic specificity. Ditto for the capacity for mental time travel. Actually, I’d love to see a specification of what these two capacities consist in. We know that people can think counterfactually (which is what this seems to amount to, more or less), but we have no idea how this is done. It is a mystery how it is that people entertain counterfactual thoughts (i.e. what cognitive powers undergird this capacity), though it cannot be doubted that humans (and maybe other animals) do this. Of course, unless we can specify what this capacity consists in (at least in part), we cannot ask if linguistic properties are simple reflections of it. So, virtually all of the claims to the effect that theory of mind (not much of a theory, by the way, as we have no idea how people travel into other minds either!) and time travel suffice to get us linguistic structures are empty verbiage. Let me repeat this: the claims are not false, they are EMPTY, VACUOUS, CONTENTLESS.

And sadly, this is quite characteristic of the genre. Say what you will about Chomsky’s proposal, it does have the virtue of specifying the capacity of interest. What he is interested in is how the generative capacity that gives rise to certain kinds of structures arose, and he argues that given its formal properties it could not have arisen gradually. Recursion is an all-or-nothing property. You either got it or you don’t. So whenever it arose, it did not do so in small steps, first 2-item structures, then 3, then 4, then unboundedly many. That’s not sensible, as I’ve mentioned more than a few times before (see, e.g. here and here). So Chomsky may be wrong about many things, but at least he can be wrong, for he has a hypothesis which starts with a specified capacity. This is a very rare thing in the evolang world, it appears.
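For readers who want the all-or-nothing point in concrete form, here is a minimal sketch, mine and not Chomsky’s (or anyone’s) official formalism. The point it illustrates: the very same binary operation that builds a 2-item structure builds structures of any depth once it can apply to its own output, so a bounded “protolanguage” stage would require an added stipulation, not a weaker operation.

```python
# A minimal illustrative sketch of a Merge-style operation (not an
# implementation of any published proposal).

def merge(a, b):
    """Combine two syntactic objects into a new one (modeled as a pair)."""
    return (a, b)

def depth(x):
    """Depth of hierarchical embedding of a merged object."""
    return 1 + max(depth(c) for c in x) if isinstance(x, tuple) else 0

s = merge("the", "dog")   # [the dog]              -> depth 1
s = merge("saw", s)       # [saw [the dog]]        -> depth 2
s = merge("Mary", s)      # [Mary [saw [the dog]]] -> depth 3
print(depth(s))           # 3 -- and nothing in merge() caps this

# A stage allowing only 2- or 3-item structures would need an extra
# stipulated bound; it is not a simpler version of the operation itself.
```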

Actually, it’s worse than this. So rare is it that journals do not realize that, absent such specifications, papers purportedly dealing with the topic are empty. The Corballis paper appears in TiCS. Do the editors know that it is contentless? I doubt it. They think there is a raging “debate” and they want to be the venue where those interested in the “debate” go to be titillated (and maybe informed). But there is no debate, because at least the majority of the discussants don’t say anything. The most that one can say of many contributions (the Corballis paper being one) is that they strongly express the opinion that Chomsky is wrong. That there is nothing behind this opinion, that it is merely phatic expression, is not something the editors have likely noticed.

The Corballis paper is worth looking at as an object lesson. For those that want more handholding through the vices, there is also a joint reply (here) by a gang of seven (overkill IMO) showing how there is no there there, and pointing out that, in addition, the paper seems unaware of much of modern evolutionary biology. I cannot comment on the last point competently.[1] I can say that the reply is right in noting the Corballis paper “leave[s] the problem [regarding evolang, NH] exactly where it was, adding nothing,” precisely because it fails to specify “the mechanisms of recursive thought” in time travel or theory of mind and how these “might lead to the feat that has to be explained” [i.e. how language with its distinctive properties might have arisen, NH].

So can a paper be both vapid and vacuous? It appears that it can. For those interested in writing one, the Corballis paper provides a perfect model. If only it were an outlier!



[1] Though I can believe it. The paper cites Evans and Levinson, Tomasello and Everett as providing solid critiques of modern GG. This is sufficient evidence that the Corballis paper is not serious. As I’ve beaten all of these horses upside the head repeatedly, I will refrain from doing so again here. Suffice it to say that approving citations of this work suffice by themselves to cast doubt on the seriousness of the paper citing it.

Monday, January 25, 2016

Three pieces to look at

I am waiting for links to David Poeppel’s three lectures and when I get them I will put some stuff up discussing them. As a preview: THEY WERE GREAT!!! However, technical issues stand in the way of making them available right now, and to give you something to do while you wait, I have three pieces that you might want to peek at.

The first is a short article by Stephen Anderson (SA) (here). It’s on “language” behavior in non-humans. Much of it reviews the standard reasons for not assimilating what we do with what other “communicative” animals do. Many things communicate (indeed, perhaps everything does, as SA states in the very first sentence), but only we do so using a system of semantically arbitrary structured symbols (roughly, words) that combine to generate a discrete infinity of meanings (roughly, syntax). SA calls this, following Hockett, the “Duality of Patterning” (5):

This refers to the fact that human languages are built on two essentially independent combinatory systems: phonology, and syntax. On the one hand, phonology describes the ways in which individually meaningless sounds are combined into meaningful units — words. And on the other, the quite distinct system of syntax specifies the ways in which words are combined to form phrases, clauses, and sentences.

Given Chomsky’s 60-year insistence on hierarchical recursion and discrete infinity as the central characteristic of human linguistic capacity, the syntax side of this uniqueness is (or should be) well known. SA usefully highlights the importance of combinatoric phonology, something that Minimalists, with their focus on the syntax-to-CI mapping, may be tempted to slight. Chomsky, interestingly, has focused quite a lot on the mystery behind words, but he too has been impressed with their open-textured “semantics” rather than their systematic AP combinatorics.[1] However, as SA notes, the latter is really quite important.

It is tempting to see the presence of phonology as simply an ornament, an inessential elaboration of the way basic meaningful units are formed. This would be a mistake, however: it is phonology that makes it possible for speakers of a language to expand its vocabulary at will and without effective limit. If every new word had to be constructed in such a way as to make it holistically distinct from all others, our capacity to remember, deploy and recognize an inventory of such signs would be severely limited, to something like a few hundred. As it is, however, a new word is constructed as simply a new combination of the inventory of familiar basic sound types, built up according to the regularities of the language’s phonology. This is what enables us to extend the language’s lexicon as new concepts and conditions require. (5)

So our linguistic atoms are peculiar not only semantically but phonetically as well. This is worth keeping in mind in Evolang speculations.
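To put rough numbers on SA’s arithmetic, here is a back-of-the-envelope sketch. The phoneme inventory size and maximum word length are my invented assumptions, not figures from the paper, and real phonotactics would prune many of these strings, but the asymmetry survives any reasonable choice of numbers.

```python
# Holistic signs vs combinatorial phonology: a toy comparison.
# All numbers are illustrative assumptions.

phonemes = 25        # assumed size of a language's sound inventory
max_len = 6          # assumed maximum word length in segments

# Every string of 1..max_len phonemes is a potential distinct word form.
combinatorial = sum(phonemes ** n for n in range(1, max_len + 1))
holistic_cap = 300   # SA's "few hundred" holistically distinct signs

print(f"phoneme strings up to length {max_len}: {combinatorial:,}")
print(f"holistic-sign ceiling: ~{holistic_cap}")
```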

So, SA reviews some of the basic ways that we differ from them when we communicate. The paper also ends with a critique of the tendency to semanticize (romanticize the semantics of) animal vocalizations. SA argues that this is a big mistake and that there is really no reason to think that animal calls have any interesting semantic features, at least if we mean by this that they are “proto” words. I agree with SA here. However, whether I do or not, if SA is correct then this is important, for there is a strong temptation (and tendency) to latch onto things like monkey calls as the first steps towards “language.” In other words, it is the first refuge of those enthralled by the “continuity” thesis (see here). It is thus nice to have a considered take-down of the first part of this slippery slope.

There’s more in this nice compact little paper. It would even make a nice piece for a course that touches on these topics. So take a look.

The second paper is on theory refutation in science (here). It addresses the question of how ideas that we take to be wrong are scientifically weeded out. The standard account is that experiments are the disposal mechanism. This essay, based on the longer book that the author, Thomas Levenson, has written (see here), argues that this is a bad oversimplification. The book is a great read, but the main point is well expressed here. It explains how long it took to lose the idea that Vulcan (you know, Mr Spock’s birthplace) exists. Apparently, it took Einstein to kill the idea. Why did it take so long? Because the claim that Vulcan existed was a good idea that fit well with Newton’s ideas and that experiment had a hard time disproving. Why? Because small modifications of good theories are almost always able to meet experimental challenges, and when there is nothing better on offer, such small modifications of the familiar are reasonable alternatives to dumping successful accounts. So, naive falsificationism (the favorite methodological stance of the hard-headed, no-nonsense scientist) fails to describe actual practice, at least in serious areas of inquiry.

The last paper is by David Deutsch (here). The piece is a critical assessment of “artificial general intelligence” (AGI). The argument is that we are very far from understanding how thought works and that the contrary optimism that we hear from the CS community (the current leaders being the Bayesians) is based on an inductivist fallacy. Here’s the main critical point:

[I]t is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation.

Note, the last sentence is the old observation about the vacuity of citing “similarity” as an inductive mechanism. Any two things are similar in some way. And that is the problem. That this has been repeatedly noted seems to have had little effect. Again and again the idea that induction based on similarity is the engine that gets us to the generalizations we want keeps cropping up. Deutsch notes that this is still true of our most modern thinkers on the topic.

Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. … As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.

The only thing that Deutsch gets wrong in the above is the idea that mainstream psych has gotten rid of its inductive bias. If only!
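For concreteness, here is a minimal sketch of the input-output picture Deutsch is criticizing, using the “pattern of 19s” from the quote above: mechanical Bayesian re-weighting drives confidence in “every year begins with 19” toward certainty, right up until the explanation-free prediction fails. The sketch is mine and every number in it is invented for illustration.

```python
# Toy Bayesian re-weighting of H = "every year begins with 19".
# Illustrative numbers only.

def bayes_update(prior: float, p_e_given_h: float, p_e: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * prior / p_e

p_h = 0.5                        # assumed initial credence in H
for year in range(1990, 2000):   # a run of confirming 19xx observations
    p_e_given_h = 1.0            # H predicts a 19xx year with certainty
    p_e_given_not_h = 0.5        # assumed likelihood if H is false
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h = bayes_update(p_h, p_e_given_h, p_e)
    print(year, round(p_h, 3))

# Credence in H approaches 1 -- and the year 2000 falsifies it anyway.
# Deutsch's point: without the explanation (how calendars work), the
# re-weighting never says which continuation is "the same thing again".
```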

The piece is a challenge. I am not really fond of the way it is written. However, the basic point it makes is on the mark. There are serious limits to inductivism and the assumption that we are on the cusp of “solving” the problem is deserving of serious criticism.

So three easy pieces to keep you busy. Have fun.



[1] I put ‘semantics’ in scare quotes because Chomsky does not think much of the idea that meaning has much to do with reference. See here and here for some discussion.

Wednesday, April 29, 2015

Shigeru Miyagawa and Vitor Nóbrega comment on the previous post

Thanks to Shigeru and Vitor for taking the time to elaborate on the points they make in their paper.

***

Dear Norbert,

Thanks for taking up our paper in your blog (Nóbrega and Miyagawa, 2015, Frontiers in Psychology). We are glad that you appreciate our arguments against the gradualist approach to language evolution. There are two things that don't come out in your blog that we want to note.

First, our arguments against the gradualist view are predicted by the Integration Hypothesis, which Miyagawa proposed with colleagues in earlier Frontiers articles (Miyagawa et al. 2013, 2014). The gradualists such as Progovac and Jackendoff claim that compounds such as doghouse and daredevil are living fossils of an earlier stage in language, which they call protolanguage. The reason is that the two "words" are combined without structure, due to the fact that these compounds (i) have varied semantic interpretations (NN compounds), and (ii) are unproductive and not recursive (VN compounds). We argued that if one looks beyond these few examples, we find plenty of similar compounds that are fully productive and recursive, such as those in Romance and Bantu. These productive forms show that the members that make up the compound are not bare roots, but are "words" in the sense that they are associated with grammatical features of category and sometimes even case.

This is precisely what the Integration Hypothesis (IH) predicts. IH proposes that the structure found in modern language arose from the integration of two pre-adapted systems. One is the Lexical system, found in monkeys, for example. The defining characteristic of the L-system is that it is composed of isolated symbols, verbal or gestural, that have some reference in the real world. The symbols do not combine. The other is the Expressive system found in birdsong. The E-system is a series of well-defined, finite state song patterns, each song without specific meaning. For instance, the nightingale may sing up to 200 different songs to express a limited range of intentions such as the desire to mate. The E-system is akin to human language grammatical features. These are the two major systems found in nature that underlie communication. IH proposes that these two systems integrated uniquely in humans to give rise to human language.

Based on the nature of these two systems, IH predicts that the members of the L-system do not combine directly, since non-combination is a defining characteristic of the L-system. E must mediate any such combination. This is why IH predicts that there can't be compounds of the form L-L; instead, IH predicts L-E-L. Such an assumption bears a close relation to how human language roots are ontologically defined, as feature-less syntactic objects. Once roots are feature-less, they are invisible to the generative system, so there is no a priori motivation to assume that syntax merges two bare roots, that is, two syntactically invisible objects.
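[A toy formalization of this prediction may help readers. The sketch below is mine, not the authors'; the class and feature names are invented for illustration. The point is just that feature-less roots are invisible to merge, so combination must go through E-system material: L-E-L, never L-L. NH]

```python
# Toy model (illustrative only): bare roots carry no grammatical features
# and cannot be merged; only roots integrated with E-system features
# (e.g. category) are visible to syntax.

from dataclasses import dataclass

@dataclass(frozen=True)
class Root:              # L-system: isolated symbol, no features
    form: str

@dataclass(frozen=True)
class Categorized:       # root integrated with E-system features
    root: Root
    category: str        # e.g. "n" for noun

def merge(a, b):
    """Combine two syntactic objects; bare roots are invisible and rejected."""
    for x in (a, b):
        if isinstance(x, Root):
            raise TypeError(f"bare root '{x.form}' cannot merge (L-L banned)")
    return (a, b)

dog, house = Root("dog"), Root("house")
# merge(dog, house)                        # L-L: raises TypeError
compound = merge(Categorized(dog, "n"),
                 Categorized(house, "n"))  # L-E-L: allowed
print(compound)
```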

The second point is that the L-system is related to such verbal behavior as the alarm calls of Vervet monkeys. We focus on the fact that these calls are isolated symbols, each with reference to something in the real world (thus, they are closer to concepts than to full-blown propositions). You question the correlation by noting that while the elements in a monkey's alarm calls appear to be purely referential, words in human language are more complex, a point Chomsky also makes. We accept this difference, but separate from it, roots and alarm calls share the property, if we are right, of being isolated elements that do not directly combine. This is the property we key in on in drawing a correlation between roots and alarm calls as belonging to the L-system. In addition to the referential aspect of alarm calls, there is another important question to solve: what paved the way to the emergence of the open vocabulary stored in our long-term memory, given that alarm calls are very restricted? Perhaps what you’ve mentioned as “something ‘special’ about lexicalization”, that is, the effect that Merge had on the pre-existing L-system, may have played a role in the characterization of human language roots, allowing the proliferation of a great number of roots in modern language. Nevertheless, we will only get a satisfactory answer to this question when we have a better understanding of the nature of human language roots.

Finally, you might be interested to know that Nature just put up a program on primate communication and human language on its Nature Podcast, in which Chomsky and Miyagawa are the linguists interviewed.
Also, the BBC will be broadcasting a Radio 4 program on May 11 (GBST) about the evolution of language that will in part take up the Integration Hypothesis (or so Miyagawa was told).



Shigeru Miyagawa
Vitor Nóbrega

April 29, 2015