I am waiting for links to David Poeppel’s three lectures, and when I get them I will put some stuff up discussing them. As a preview: THEY WERE GREAT!!! However, technical issues stand in the way of making them available right now, so to give you something to do while you wait, I have three pieces that you might want to peek at.
The first is a short article by Stephen Anderson (SA) (here). It’s on “language” behavior in non-humans. Much of it reviews the standard reasons for not assimilating what we do with what other “communicative” animals do. Many things communicate (indeed, perhaps everything does, as SA states in the very first sentence), but only we do so using a system of semantically arbitrary structured symbols (roughly, words) that combine to generate a discrete infinity of meanings (roughly, syntax). SA calls this, following Hockett, the “Duality of Patterning” (5):
This refers to the fact that human languages are built on two essentially independent combinatory systems: phonology, and syntax. On the one hand, phonology describes the ways in which individually meaningless sounds are combined into meaningful units — words. And on the other, the quite distinct system of syntax specifies the ways in which words are combined to form phrases, clauses, and sentences.
Given Chomsky’s 60-year insistence on hierarchical recursion and discrete infinity as the central characteristics of human linguistic capacity, the syntax side of this uniqueness is (or should be) well known. SA usefully highlights the importance of combinatoric phonology, something that Minimalists, with their focus on the syntax-to-CI mapping, may be tempted to slight. Chomsky, interestingly, has focused quite a lot on the mystery behind words, but he too has been impressed by their open-textured “semantics” rather than their systematic AP combinatorics. However, as SA notes, the latter is really quite important.
It is tempting to see the presence of phonology as simply an ornament, an inessential elaboration of the way basic meaningful units are formed. This would be a mistake, however: it is phonology that makes it possible for speakers of a language to expand its vocabulary at will and without effective limit. If every new word had to be constructed in such a way as to make it holistically distinct from all others, our capacity to remember, deploy and recognize an inventory of such signs would be severely limited, to something like a few hundred. As it is, however, a new word is constructed as simply a new combination of the inventory of familiar basic sound types, built up according to the regularities of the language’s phonology. This is what enables us to extend the language’s lexicon as new concepts and conditions require. (5)
So our linguistic atoms are peculiar not only semantically but phonetically as well. This is worth keeping in mind in Evolang speculations.
So, SA reviews some of the basic ways that we differ from other animals when we communicate. The paper also ends with a critique of the tendency to semanticize (romanticize the semantics of) animal vocalizations. SA argues that this is a big mistake and that there is really no reason to think that animal calls have any interesting semantic features, at least if we mean by this that they are “proto” words. I agree with SA here. But whether I do or not, if SA is correct, then this matters, for there is a strong temptation (and tendency) to latch onto things like monkey calls as the first steps towards “language.” In other words, they are the first refuge of those enthralled by the “continuity” thesis (see here). It is thus nice to have a considered takedown of the first part of this slippery slope.
There’s more in this nice compact little paper. It would even make a nice piece for a course that touches on these topics. So take a look.
The second paper is on theory refutation in science (here). It addresses the question of how ideas that we take to be wrong are scientifically weeded out. The standard account is that experiments are the disposal mechanism. This essay, based on the longer book that its author, Thomas Levenson, has written (see here), argues that this is a bad oversimplification. The book is a great read, but the main point is well expressed here. It explains how long it took to lose the idea that Vulcan (you know, Mr Spock’s birthplace) exists. Apparently, it took Einstein to kill the idea. Why did it take so long? Because the existence of Vulcan was a good idea that fit well with Newton’s ideas and that experiment had a hard time disproving. Why? Because small modifications of good theories are almost always able to meet experimental challenges, and when there is nothing better on offer, such small modifications of the familiar are reasonable alternatives to dumping successful accounts. So, naive falsificationism (the favorite methodological stance of the hard-headed, no-nonsense scientist) fails to describe actual practice, at least in serious areas of inquiry.
The last paper is by David Deutsch (here). The piece is a critical assessment of “artificial general intelligence” (AGI). The argument is that we are very far from understanding how thought works and that the contrary optimism that we hear from the CS community (the current leaders being the Bayesians) is based on an inductivist fallacy. Here’s the main critical point:
[I]t is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, those drastic ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation.
Note that the last sentence is the old observation about the vacuity of citing “similarity” as an inductive mechanism: any two things are similar in some way, and that is the problem. That this has been repeatedly noted seems to have had little effect. Again and again, the idea that induction based on similarity is the engine that gets us to the generalizations we want keeps cropping up. Deutsch notes that this is still true of our most modern thinkers on the topic.
Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. … As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.
The only thing that Deutsch gets wrong in the above is the idea that mainstream psych has gotten rid of its inductive bias. If only!
The piece is a challenge. I am not really fond of the way it is written. However, the basic point it makes is on the mark: there are serious limits to inductivism, and the assumption that we are on the cusp of “solving” the problem deserves serious criticism.
So three easy pieces to keep you busy. Have fun.