A while ago I mentioned work suggesting that Gallistel's conjecture (that cognitive computation does not require neural nets) is correct. The work discussed "learning" in single-cell slime molds and plants. At any rate, this stuff is going mainstream in that Quanta brings this research together in this review (republished in Wired).
The piece focuses on the controversy over whether this can actually be "primitive cognition," noting that for many, cognition is something only brains can do (by brains cogneuro types mean ensembles of neurons). The fear is that this kind of research amounts to a "'devaluing' of the specialness of the brain" (12). Others soothe these cogneuro fears by claiming that the "debate is arguably not a war about science, but about words" (13).
Both claims are wrongheaded. These studies directly challenge the standard cogneuro paradigm that brain computation is fundamentally inter-neuronal, which is precisely what the Gallistel-King conjecture challenges. The work on slime molds and plants indicates that what fits the behavioral definitions of learning exists in organisms without the requisite neural nets. The conclusion is that neural nets are not necessary for learning. This surely points to the possibility that the standard picture in cog-neuro concerning the centrality of neural nets to cognition needs a fundamental rethink. In fact, it would be biologically amazing if intra-neuronal/cellular cognitive computation were possible and extant in lower organisms but higher organisms didn't use this computational power at all.
Read the review. The content is not news to FoLers. But the reactions to the work and the weird attempts to either discredit, downplay or reinterpret it are fun to look at. The significant thing, IMO, is that this stuff is becoming more and more mainstream. I think we might be on the edge of a big change of mind.
Monday, July 16, 2018
Here’s another note on the contemporary ubiquitous desire (especially among scientists and “experts”) to demarcate science (and with it “expertise”) from everything else. You know my take: it cannot be done. We currently have no (interesting and principled) way to demarcate scientific inquiry from other kinds and there is little reason to believe that a (non-trivial bright) line will be discovered anytime in the near (or distant) future. FWIW, philosophers have been trying to find this border for a very long time (you can imagine there is a professional interest in being able to distinguish sense from nonsense), and the current wisdom in the philo community is that there is no there there. Here is a recentish short provocative piece on the topic that goes over the familiar ground (henceforth DS). As I read it, it provoked a few questions: Why should we care to demarcate the scientific from the non-scientific? Is this an urgent project for Science (note the big ‘S’) or for individual sciences? And if so, why? And if not, why does it appear to be sprouting everywhere one looks? Let’s expatiate.
First, we can ask the factual question: what if anything unifies what we collect under the term ‘Science’? The short answer is not much. DS goes over the usual suspects. To the degree that there is a scientific method, it is not refined enough to distinguish the things that those desirous of a demarcation line would put on one side from those they would put on the other. “Do your best in the circumstances” is probably all that one can milk out as general methodological advice. This is Feyerabend’s familiar (and correct) observation.
If not a single method, what of communal methods? This too is of little help. As DS notes (2):
The methods used to search for subatomic components of the universe have nothing at all in common with field geology methods…Nor is something as apparently obvious as a commitment to empiricism a part of every scientific field. Many areas of theory development, in disciplines as disparate as physics and economics, have little contact with actual facts, while other fields now considered outside of science, history and textual analysis, are inherently empirical.
So, there is no general method, and there are too few robust methods cutting across domains of inquiry to be of use.
Second question: does this matter? Not obviously. An inquiry requires some questions, puzzles, facts, and methods/technology. These are all generally justified in unison. A question prompted by an observation yields a puzzle, which might be explained by deploying a particular method, generating a more refined question, leading to a deeper puzzle, …. Of course, one can start someplace else. A puzzle prompts an observation that clashes with an inchoate “theory” that suggests other facts, that enforce/dispel the puzzle etc…. Or an observation suggests a puzzle that provokes an inquiry that leads to a hypothesis that… All of this can be locally monitored, and justification can and does take account of the rich circumstantial detail. Engaging in such inquiry requires making the rules up as you go along: establishing the requisite standards for the clarity of the questions at hand, deciding what counts as a good explanation relative to these questions, adumbrating the relevant kinds of data, giving sample examples of what might resolve the puzzles, all leading to refinements of the initial questions and a restart of the process. The aim is not to avoid circularity (that cannot be done) but to progressively widen the circle so that it is not vicious. Anything goes that gets one going, though how one measures whether one is going, and in what direction(s) one is moving, is also up for constant negotiation.
So, within a particular program all the issues relating to method become important, for they end up defining the enterprise. There is nothing outside of this process to adjudicate the activity, or at least nothing principled. But this does not mean that within it there are not better and worse arguments or that disputes between conceptions must be irrational. One can, must, and does argue about the interest of the question being asked. One can, must and does argue about the methods being deployed to answer that question. One can, must and does argue about whether proposals actually address the question being asked. And one should do all of this most of the time. However, and this is the main point, none of this requires that we have rich general methodological principles or that what is good in domain A will be of any consequence or relevance in domain B. Of course, looking at other domains to see what they do can be useful and suggestive (IMO, physics envy is an excellent research attitude), but so can banging your head against the wall while reciting the Lord’s Prayer.
Moreover, none of this local wrangling will be useful in evaluating what counts as Science. If justification is local then demarcating the good from the bad in a general manner that applies across domains is likely to be question begging. As any academic knows, all fields have their methods and questions. If these are the measure of Science, then everything is Science. Christian (and Political (and dare I say, Language)) Science included.
So, there is no general Scientific Method and, luckily, as regards individual inquiries it does not matter. So why the endless quest among non-philosophers? Why is it important to demarcate where science ends and non-science begins? As DS notes, this is a particularly hot issue for scientists (and “technology and policy oriented intellectuals”).
I can attest to this worry. The whole obsession with STEM and spreading the STEM gospel is testament to this. I get daily appeals from STEM candidates running for congress. There is even an organization that supports getting STEMers elected (314 Action). The idea seems to be that being STEM gives one a leg up on rationality and political insight. In fact, the presupposition seems to be that having STEM confers special authority on those who have it. And where does the authority come from? Well, STEM implies scientific, and this implies having expertise of a kind generally applicable to political matters. So demarcating science from non-science is there to separate “those who are granted legitimacy to make claims about what is true in the world from the rest of us…” (2). If this is the goal, then the need for global standards becomes apparent and the demarcation problem becomes urgent. Why? Because only then can science be used to protect the enlightened from the unwashed by endowing some with authority and removing it from others. And this needs an objective basis (or at least a perceived objective basis).
And not only for those on the receiving end. It is critical that the recipients of authoritative pronouncements believe that these are legit. Grounding them in Science makes them legit; hence being scientific is critical. Moreover, those who wield authority must also believe that they are doing so legitimately, if only to mitigate cognitive dissonance. This is an important line, and the harder it is to draw, the more a blanket justification of some views over others teeters.
Note that none of this is intended to say that, absent a demarcation of the scientific from all else, all reasons are on a par. Even without a demarcation, there is excellent reason to believe that the planet is getting warmer due to human activity, that evolution operates, that austerity policies during depressed economic times are self-defeating, that an FL/UG that generates hierarchical Gs exists and that humans have it, etc. These conclusions are not hard to defend. But they are not defended by noting that they are the products of scientific inquiry; they are defended by noting the evidence and the theory behind them. That’s what does the work, and claims backed by little evidence or theory are of little value regardless of the methods used to generate them.
Nor does any of this mean that being in a position to adjudicate proposals might not require quite a bit of technical expertise. It might, and often does. But, again, it is not that technical expertise is what makes something scientific; rather, some expertise is grounded in real questions addressed by good theories backed by good data. Technical wizardry can be an indication of cargo cultism rather than insight, as anyone in any mildly technical domain can attest.
So, one reason for the urgency of the demarcation issue today is the challenge to “authority” that is in the air, and the hope that cloaking authority in “science” will lend it legitimacy and so stifle the challenge.
There is another reason as well. Many domains of inquiry are suffering from internal problems. By this I mean problems internal to the domains of inquiry themselves. There is the “replication” crisis in many sciences that is beginning to undermine their status as “sciences” in the public mind (and this has a spillover effect into the public status of (big ‘S’) Science more generally). There is also the fact that some domains seem to have hit an impasse despite their overwhelming success. Fundamental physics seems to be in this position nowadays if the public toing and froing is any indication (see here for a short version of the angst regarding work in this area). So the legitimacy issue is hitting Science from both ends. The replication crisis stems from a purported problem with the data. On the other end, fundamental physics is suffering from an unhealthy obsession with beauty (aesthetic benchmarks concerning “simplicity,” “naturalness” and “elegance” (see here)). Both critiques point to an uncomfortable conclusion for many: science as currently practiced is getting away from the “facts” and the results should be treated very skeptically (and what is wrong with a good dose of skepticism anyhow?). But, IMO, this is the wrong conclusion.
The right one is that we sometimes run into walls where our methods fail us. Or, when we really don’t know what’s going on, then nothing much helps except a good idea that gets us going again. And if a problem is really hard, then good ideas might be very hard to come by. Big surprise! But this idea, it appears, is tough to swallow. Why?
There is a tacit assumption that there is a scientific way of doing things and that if we just do things in this way then insight must follow. Scientists are particularly prone to this point of view. Not only is it self-flattering (though it is, it really is) but it is also very hopeful. Given this view, all setbacks are temporary. All mistakes will self-correct. All obstacles will eventually be overcome and all questions will receive deep and insightful answers. No domain is impenetrable. All problems are solvable. There are no limits to knowledge. Ignorance is temporary, even if hard to dispel. This is a very hopeful message as it encourages the idea that there is always something that can be done that if done right will get us moving forward.
This moral optimism is the decent side of the belief in a scientific method. And this optimism is what the current failures within the sciences challenge. Add to this (i) that nobody likes pessimists (they are such downers), and (ii) that it is never possible to prove that more hard work, more careful experiments and stats etc. won’t get us moving again, and the allure and psychic rewards of the hopeful attitude win the day. So, given the positive spin we place on optimism (“Morning in America”) and the negative one we place on pessimism, there is little surprise that when things get tough there is a desire to justify, which in this case means demarcate. This allows us to segregate the rot and justify optimism for the newly refurbished (rot removed) enterprises.
There is, as always, one further ingredient: Money!! Today money is tight. When money is tight you look to defend your share. Science (big ‘S’ again) is a weapon in the funding wars. Sure, lit and history and philosophy and whatever are fluffy and only valuable when we are flush, but Science, well that needs no defense. Of course, this only works if we can tell what is Science and what isn’t, and hence the obsession with demarcation among scientists.
So what makes the demarcation issue hot again? The trifecta of the perceived decline in the authority of experts, the current failure of traditional methods in some domains, and declining support together provide more than enough reason to motivate the hunt for a methodological grail.
One of the consequences of Rish conceptions of inquiry is the idea that it comes with implied natural limits. Scientific “success” is always a bit of a miracle (for Descartes, only God guaranteed it (Darwin has often been invoked to similar ends, but his powers are decidedly less expansive)). For people like me, this makes every apparent explanatory breakthrough something to cherish and deserving of the utmost respect. In practical terms, this leads me to hold firmly onto possible explanations even when confronted with a lot of (apparent) counter-evidence. Others dump potential explanations (i.e. theories) more quickly. This is partly a matter of scientific taste. However, there are times when tried and true methods fail. Then doing useful work that meets accepted criteria becomes harder. This should not come as a surprise. It’s the flip side of being able to gain non-trivial understanding of anything at all. It’s what any self-conscious Rist who does not have faith in divine harmony would expect.
There are many uninteresting ways: what the NSF and NIH fund, who the NYT designates an “expert” worth quoting, what Andrew Gelman takes to be scientific, etc. It is not excessive, IMO, to observe that, currently, what is scientific sits in the same category as what is prurient: it is at best known when seen. And not even then.
The main utility of looking around is to prevent being bullied by methodological sadists and being tripped up by those insisting that asking a question in some particular way or pursuing a program with some particular emphasis falls outside the “scientific.” The best answer to this is to appreciate that there is no obvious way to fall outside the relevant pale, as there is no principled border. However, the second best way is to observe how your proposals comport with those utilized by other more obviously successful inquiries. The principal goal of physics envy is defensive. It cuts short all sorts of nonsense (e.g. falsifiability, anti-theory hogwash, Eish concerns with idealization, etc.).
This does not imply, though, that we can know what these limits are. Chomsky has discussed this a lot (the scope-and-limits stuff). It is often derogatorily labeled ‘mysterianism.’ As Chomsky has repeatedly noted, the idea that there are limits to what we can understand is the flip side of noting that we can understand some things deeply.
Monday, July 9, 2018
One thing Empiricism (E) got right is that there are no foundational assumptions of general scientific utility that are empirically inviolable. Rationalists (R) once thought otherwise, thinking that the basics of the mechanical philosophy (matter is geometrical and all forces are contact forces) as well as an innate appreciation of some of God’s central features could undergird permanent foundations for investigation of the physical world. One of Newton’s big debates with the Cartesians revolved around this point and he argued (as convincingly examined here by Andrew Janiak) that we have no privileged access to such theoretical starting points. Rather, even our most basic assumptions are subject to empirical evaluation and can be overturned.
This is now the common view. The Rs were wrong, the Es right. Or at least some of them were. Descartes and his crew were metaphysical foundationalists in that they took the contours of reality to be delivered via clear and distinct innate ideas. As these ideas reflected what had to be the case, they served to found physical inquiry in a mechanical view of the world. So for Rs, innate ideas were critical in the sciences in that they were metaphysically foundational. Newton denied this. More exactly, he denied that there were empirically unassailable foundations for natural philosophy. And he did this by arguing that even foundational assumptions (e.g. the mechanical philosophy) were subject to experimental evaluation. This he proceeded to do in the Principia, where he argued that the mechanical philosophers had it all wrong: mass and gravity were inconsistent with mechanism, and so mechanism was false. Great argument.
As Janiak tells the story, three features of Newton’s work really made his conclusions convincing. First, he provided a mathematical formulation of the force laws and the law of gravitation. Second, he showed how these unify terrestrial and celestial mechanics. That they were unified was a staple of Cartesian mechanical thinking. However, the Cartesians could not show that this conviction was scientifically justifiable. Newton unified the two domains via gravity (plus he threw in the tides as a bonus), thus achieving what the mechanical philosophers wanted by using a force they despised. Third, Newton provided a principled account of Galileo’s observation that acceleration is independent of the shape/size/density of the accelerating objects. Why this should be so was a real puzzle, and an acknowledged one. Given the link between gravitation and mass that Newton forged, this independence falls out as a trivial consequence. The second and third achievements were substantive within the framework of Cartesian physics (they solved problems that Cartesians recognized as worthy of explanation) but they were inconsistent with Cartesian mechanical philosophy (because they were based on Newton’s conceptions of mass and gravity), as was widely understood. This is why Newton’s physical interpretation of his formal work was resisted, even though everyone agreed that the math was wondrous. The problem was not the math, but what Newton took the math to mean.
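To see how the independence falls out, here is the standard two-line reconstruction (in modern notation, not Newton’s own):

```latex
% Newton's law of gravitation: force on a falling body of mass m
% from the Earth (mass M) at distance r:
F = \frac{G M m}{r^2}
% Newton's second law gives F = m a; equating the two and cancelling m:
a = \frac{G M}{r^2}
```

The falling body’s own mass m cancels out, so every body at a given distance accelerates identically, whatever its size or density. Galileo’s puzzling observation becomes a one-step corollary.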
The Janiak book goes into lots of detail regarding Newton’s argumentative strategy against Cartesian orthodoxy. It is a really great read. The principal form of the argument goes as follows: it is possible to know that something is true without knowing exactly how it can be. This involved unpacking Newton’s famous dictum that he does not feign hypotheses. This, Janiak notes, does not mean that Newton did not advance theories. Clearly he did. Rather, Newton meant that he could both say that he knew something to be the case and that he did not know exactly how what he knew to be the case could be the case. Sound familiar? This is a critical distinction, and Newton’s point is that unless one carefully distinguishes the questions one is asking it is very hard to evaluate the quality of the answers. Some data for some questions can be completely compelling. The same data for other questions hardly begin to scratch the surface.
Newton showed that it is possible both to know that gravity is real and not to know exactly why it has the properties we know it to have, or even to know exactly what it is (a relation or a quality). What Newton did know was that one could believe that gravity was real without thinking that it was an inherent property of bodies that acted at a distance. He insisted that local action was consistent with gravity (there was no contradiction between the two) even though he did not know how it could be (he had no constructive theory of local action that included gravity).
I would have put things somewhat differently: Newton knew that gravity existed, what its signature features were, and some central cases of its operations. He had an effective theory. But he did not believe that he had a fundamental account of gravity, though he knew that there could not be a mechanical explanation of these gravitational effects. Moreover, he knew that nothing known at the time implied that it was inconsistent with local action of some other yet-to-be-determined kind. So, he knew a lot but not everything, and that was good enough for him.
Note what convinced contemporaries: unification, and explanation of outstanding generalizations. So too today: we want to unify distinctive domains in syntax (MP requires this) and want to explain why the generalizations we appear to have discovered hold. This is the key to moving forward, and imitating Newton is not the worst path forward.
So, Eism did come out right on this point, but most Es did not. Newton was somewhat of an outlier on these foundational matters. He rightly understood that Cartesian metaphysics and its innate-ideas window into reality would not wash. But he did not appear to embrace an Eist epistemology (or at least theory of mind). Unlike many, he did not endorse the view that there are no ideas in the intellect that are not first in the senses. Thus, his beef with Rists was not their nativism but their supposition that being innate conferred some kind of privileged metaphysical status. The way Newton short-circuited the metaphysical conclusions was by showing that they did not hold up to experimental and theoretical scrutiny. He did not argue that the ideas could not be useful because they could not exist (i.e. there are no innate ideas) or that the only decent ideas were those based in perception. This was fortunate, for curiously the Eists did not really grasp Newton’s basic take-home message: ideas, wherever they come from, require scientific validation through experiment and theory. And every part of every theory is in principle up for grabs, whatever its psychological source. Eists have tended not to understand that this was Newton’s message and have concluded that Rist foundationalism failed because it countenanced innate ideas.
But this was not the problem. The problem came with the added assumption that innate ideas, in virtue of being innate, are empirically unassailable. Curiously, Eists came up with their own form of foundationalism, based on the view that the only good ideas are the ones grounded in the senses. This idea didn’t turn out very well either, being just as false by Newton’s standards.
So, Rs were wrong about the relation of ideas to truth even though they were largely right about the psychology. Newton was right that there are no useful a priori foundations for the sciences (i.e. foundations that are not ultimately empirically justified). Es were right in believing that Newton showed Rism was wrong to the degree that it was foundationalist. Where Es got lost was in rejecting the Rish psychology because it failed to provide metaphysical foundations. Eism ended up looking for epistemological foundations that would demarcate legit (i.e. scientific) thinking from non-legit thinking (everything else). Eists pursued the idea that grounding one’s ideas in the senses would guarantee good foundations. Newton would not have approved. The right conclusion is that there are no epistemological shortcuts to good science.
These are two questions that, though related, should not be confused. I have argued that they often have been within linguistics too. That humans have an FL that underlies their unique capacity to acquire language is almost a tautology, IMO. How FL allows humans to do this is a substantive problem that we have just started to crack.
Monday, July 2, 2018
The obit for Koko in a recent post (here), though intended somewhat tongue in cheek, garnered three interesting comments. They pointed to a couple of recent papers that (in some sense that I will return to) extended the intellectual project of which Koko was an important cog when I was in grad school. It was the first EvoLang project that I was exposed to. The aim was to establish the linguistic facility of non-human apes. Which is what got me into that gorilla suit.
In honor of Chomsky’s birthday, Elan Dresher, Amy Weinberg and yours truly decided to settle the issue once and for all by having Koko publicly debate Noam on the topic of whether or not Koko was linguistically competent. IMO, the debate was a draw (though Noam (the real Noam, not Dresher-Noam) begged to differ). However, whatever the outcome, for a while after the debate I was considered somewhat of an expert on the topic, as many concluded that having been the one in the suit I had an ape’s eye view of the issue and so could add a heretofore unavailable slant on the topic. I milked this for all it was worth (as you would have too, I am sure).
So what was the debate about? The question was whether human language is continuous with similar linguistic capacities in our ape cousins. Eric Lenneberg dubbed the claim that human language is a quantitative extension of qualitatively analogous capacities in our ape cousins the Continuity Thesis (CT). Given CT, our evidently greater linguistic capacities are just our ape cousins’ diminished ones turbo-charged with greater brainpower. Human linguistic competence, in other words, is just ape linguistic competence goosed by a higher IQ. Given CT, we humans are not different in kind. We just have bigger brains (fatter, rounder frontal lobes), more under the cranial hood, and our superior verbal capacities just hydroplane on that increased general intelligence.
Koko (Penny Patterson’s verbal gladiator) was not the only warrior in the CT campaign. There were many apes recruited to the good fight (see here). The Gardners had Washoe, Sue Savage-Rumbaugh had Kanzi, Herb Terrace had Nym. There were others too. A lot of effort went into getting these apes to use signs (verbal, manual, computerized) and demonstrate the rudiments of linguistic competence. The aim was to show that they would/could acquire (either naturally, or more standardly after some extensive training) the capacity to articulate novel semantically composed messages using a rudimentary syntax similar to what we find in human natural language. Once this basic syntax was found, we could solve the EvoLang problem by attributing human verbal facility to this rudimentary syntax proportionately enhanced by our bigger brains into the wondrous linguistically creative syntactic engine currently found in humans. Not surprisingly, it was believed that were it possible to show that, linguistically speaking, these other apes were attenuated versions of us, this would demonstrate that Chomsky was wrong and that there was nothing all that special about human linguistic capacity. If CT was correct then the difference between them and us is analogous to the difference between a two-cylinder Citroen Deux Chevaux and a Mercedes AMG 12 cylinder. The principles were the same even if the latter dwarfed the horsepower of the former. CT, in sum, obviated the need for (and indeed would be considerable evidence against) a dedicated linguistic mental module.
This really was all the rage for quite a while (hence my several months of apish celebrity). And I would bet (given the widespread coverage of Koko’s demise) that you could still get interviewed on NPR or published in the New Yorker with a groundbreaking (yet heartwarming) story about a talking ape (apes play well, though parrots also get good press). In academic circles however, the story died. The research was largely shown to be weak in the extreme (“crappy” and “shoddy” are words that come to mind). Herb Terrace’s (non-shoddy, non-crappy) work with Nym Chimpsky (Mike Seidenberg and Laura Petitto were the unfortunate grad students that did the hard slogging) more or less put to rest the idea that apes had any significant syntax at all, and showed that what they do have is nothing like what we see in even the average three-year-old toddler.
Charles Yang has a really nice discussion of the contrast in capacities between Nym and your average toddler in a recent paper on Zipf’s law as it applies to the linguistic productions of toddlers vs. those of Nym (here). By the global measure that law affords, Charles is able to show that:
Under the Zipfian light, however, the apparent continuity between chimps and children proves to be an illusion. Children have language; chimps do not. Young children spontaneously acquire rules within a short period of time; chimpanzees only show patterns of imitation after years of extensive training.
Moreover, these “patterns of imitation” do not show the hallmarks of Zipfian diversity that we would expect to see if they were products of grammatical processes. As Charles bluntly puts it: Nym “was memorizing, not learning a rule.”
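For readers who want the flavor of the Zipfian measure: a corpus generated by productive combinatorial rules shows word (or combination) frequencies falling off roughly as 1/rank, so log-frequency plotted against log-rank has a slope near -1, while a memorized inventory of stock forms shows no such decay. Here is a minimal sketch with illustrative toy data (my own toy frequencies, not Charles's actual corpora or his specific diagnostic):

```python
import math

def zipf_slope(frequencies):
    """Fit the slope of log(frequency) against log(rank) by least squares.
    A slope near -1 is the classic Zipfian signature."""
    freqs = sorted(frequencies, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Toy data: a productive system yields a long tail of ever-rarer forms
# (frequencies decaying roughly as 1/rank)...
productive = [1000 // r for r in range(1, 51)]
# ...while a memorized inventory reuses a few stock forms about equally often.
memorized = [40] * 25

print(zipf_slope(productive))  # slope near -1: Zipfian decay
print(zipf_slope(memorized))   # slope near 0: no decay at all
```

The point of the global measure is that it does not depend on judging any single utterance: the overall shape of the rank-frequency curve betrays whether a generative rule or rote memorization produced the data.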
The ineluctable conclusion is that teaching apes to talk came nowhere close to establishing CT. It failed to establish anything like CT. In short, there is zero evidence from this quarter to gainsay the obvious observation (obvious to anyone but a sophisticated psychologist, that is) that nothing does language like humans do language, not even sorta, kinda. The gap is not merely wide, it is incommensurable. The gap is qualitative, not quantitative, and the central distinguishing difference is that we humans are gifted with a biologically sui generis syntactic capacity. We develop unique Gs on the basis of unique FL/UGs. Yay us!
Now, I admit that I thought that this line of discussion was dead and buried, at least in the academic community. But I should have known better. What the commentators noted in their comments to my previous obit for Koko is that it is baaaack, albeit in a new guise, this time sporting a fancy formal look. The recent papers adverted to, one outlining the original research conducted by a consortium of authors including Stan Dehaene (SD&Co) (here) and a comment explaining its significance by Tecumseh Fitch (TF) (here), resurrect a modern version of CT, but this time in more formidable-looking garb. IMO, this time around, the discussion is of even less biolinguistic or evolang relevance. I’d like to talk about these papers now to show why I find them deeply disappointing.
So what does SD&Co do? It shows that macaques (i.e. monkeys, not apes, so further from humans, which is really CT-optimal) can be (extensively) trained (10,000-25,000 training trials) to produce supra-regular (manual) sequences whose patterns go beyond the generative capacity of finite state automata. This is argued to be significant, for it provides evidence that crossing the regular-grammar boundary is not “unique to humans” (SD&Co, 1). This fact is taken to be relevant biolinguistically, for it “indicates cognitive capabilities [in non-humans, NH]…that approach the computational complexity level of human syntax” (TF, R695).
The experiments taught monkeys to perform sequential operations in the spatial-motor domain (touching figures in sequential patterns on a screen). These manual patterns go beyond those describable by regular grammars. It is argued that mastering this sequential capacity implies that monkeys have acquired supra-regular grammars able to generate these patterns (Phrase Structure Grammars (PSG)), heretofore thought to be “only available in humans” (SD&Co, 1). So, assuming that the experiments with the monkeys were well done (and I don’t have the expertise to deny this, nor do I wish to (nor do I think that it would be relevant to do so)), then it appears that non humans can cross over the domain of regular grammars and acquire performance compatible with computational systems in (at least) the PSG part of the Chomsky Hierarchy. Let’s stipulate that this is so. Why should we care? TF provides the argument. Here it is in rough outline:
1. Only humans show an “unbounded” communicative “expressive power”
2. Shared traits are a “boon to biologists interested in language” to “test evolutionary hypotheses about adaptive function”
3. Syntax has “until now resisted the search for parallels in our animal brethren”
4. It has heretofore been thought that “grammars” are “beyond the capabilities of nonhuman animals”
5. SD&Co “show[s] that with adequate training, monkeys can break beyond this barrier”
6. In particular, the monkeys could learn rules beyond those in finite state (regular) grammars
7. So nonhumans have “supra-regular computational capacities”
8. And this “suggest[s] that the monkey’s brain possesses the kind of cognitive mechanisms required for human linguistic syntax…”
So there it is, the modern version of CT. Monkeys have what we have, just a little less of it, in that ours requires a handful of examples (5 according to SD&Co) to get going while theirs needs on the order of 10,000-25,000. Deux Chevaux vs Mercedes it is, once again.
So, is this argument any better or more insightful than the earlier ape versions of CT? Not as far as I can tell. Let me say why I don't think so.
Note that this version of CT is very much more modest than the earlier attempts. Let’s count the ways.
First, earlier versions looked to our immediate evo relations. Here, it’s not our immediate cousins whose capacities are investigated, but monkeys (and they are very, very, very distant relations).
Second, early CT work was interested in demonstrating a productive syntax in service of semantic productivity. What’s wondrous about us is that the syntax is linked to (underlies?) semantic productivity. We are not looking at mere patterns or sequences. What is impressive in language is the fact that the relevant patterns can be semantically deployed to produce and understand an open-ended number of distinct kinds of messages/thoughts. The SD&Co experiments have nothing to do with semantic productivity. It is just pattern generation that is at issue. There is no reason provided to think that the monkeys can or could use these patterns for compositional semantic ends. So, one key feature of our syntax (the fact that it ties together meaning and articulation) is set to one side in these experiments. And this is a big retreat from the earlier CT work, which correctly recognized that linguistic creativity is the phenomenon of interest.
Third, and related to the second, the kinds of grammars the monkeys acquire have properties that our grammars never display. Human Gs don’t have mirror image rules. And the reason we think that such kinds of rules are absent is that human Gs have rules that don’t allow them. They are “structure dependent.” Our rules exploit the hierarchy of linguistic representations, and the syntax eschews their sequential properties. So it is unclear, at least to me, why being able to teach monkeys rules of the kind we never find in human Gs would tell us about the properties of human Gs that fail to pattern in accord with these rules. And if it cannot do this, it also cannot inform us as to how our kinds of Gs evolutionarily arose in the species.
This criticism is analogous to one raised against earlier CT work. It was regularly observed that even the most sophisticated language in nonhumans (the best actually being on dolphins by Herman and published in Cognition many, many, many years ago) made use of an ordinal encoding of “thematic” role to sequential position. The critters were taught a “grammar” to execute commands like “Bring A with B to C” with varying objects in the A,B,C positions. They were pretty good at the end, with the best able to string 5 positional roles and actions together. So, they mastered an ordinal template to roles and actions productively (though not recursively (see note 4)). But, and this was noted at the time and since, it was a sequential template without a hint of hierarchy. And this is completely unlike what we find in human Gs.
Interestingly, SD&Co notes that the same thing holds in its experiments (pp. 6-7).
Even after extensive training on length 4 sequences, behavioral analysis suggested that monkeys still relied on a simple ordinal memory encoding, whereas pre-schoolers spontaneously used chunking and global geometric structure to compress the information. Thus, the human brain may possess additional computational devices, akin to a “language of thought,” to efficiently represent sequences using a compressed descriptor during inductive learning.
In other words, humans display a sense of constituency (after 5 exposures) while highly trained monkeys never develop one (well, not even after 25,000 training examples). Maybe it’s because human Gs are built around the notion of a constituent, while monkey Gs never are. And as constituency is what allows human semantic productivity (it underlies our conceptions of compositionality) it really is quite an important difference between us and them. In fact, it has been what people like me who claim that human syntax is unique have pointed to forever. As such, discovering that monkeys fail to develop such a sense of hierarchy and constituency seems like a really big difference. In fact, it seems to recapitulate what the earlier CT investigations amply established. In fact, even SD&Co seem to suggest that it marks a qualitative difference, referring as they do to “additional computational devices,” albeit describing these as properties of a distinctive “language of thought” rather than what I would call “syntax.” Note that if this is correct, then one reasonable way of understanding the SD&Co experiments is as establishing that what monkeys do and what we do is qualitatively different, and that looking for Gs in our ancestors is a mug’s game (the conclusion opposite to the one that TF draws). In other words, CT is wrong (again) and we should just stop assuming that general intelligence or general computational capacities will come to the evolang rescue.
Fourth, it is important to appreciate what a modest conclusion SD&Co’s paper licenses even if true. It shows that monkeys can manage to acquire capacities wrt kinds of patterns, including mirror image ones, that are assumed to be relevantly similar to linguistically relevant ones. What makes the patterns linguistically interesting? Regular Gs cannot generate them. They require at least Gs in the PSG part of the Chomsky Hierarchy. The conclusion is that monkeys have computational capacities that go beyond the regular and lie in at least the PSG precincts. But even if correct, this is a very weak conclusion. Why?
There are billions and billions of supra-regular rules/Gs (indeed there are even humongously many regular ones). Human Gs have zeroed in on one extremely small corner of this space. And it is true that unless non-humans can master supra-regular patterns they cannot have human Gish capacities, because our Gs are supra-regular. However, even if they can master supra-regular Gs it does not follow that the kinds they can master are in any way similar to those that we deploy in our natural language facility. In other words, the claims in 7/8 above are wildly misleading. The fact that monkeys can have one kind of supra-regular capacity tells us nothing at all about whether they can ever have our kind of supra-regular capacity, the one underlying our linguistic facility. Assuming otherwise is simply a non-sequitur. In other words, TF’s claim that the monkey behavior “suggest[s] that the monkey’s brain possesses the kind of cognitive mechanisms required for human linguistic syntax” is entirely unfounded if this is taken to mean that they have computational mechanisms like those characteristic of our Gs.
Let me put this another way: GGers have provided pretty good descriptions of the kinds of properties human Gs have. The evolang question is how Gs with these features could have arisen in the species. Now, one feature of these Gs is their recursivity. But this is a very weak feature. Any G for a non-finite language in the Chomsky Hierarchy will have this feature. But virtually none of them will have the properties our Gs enjoy. What we want from a decent evolang story is an account of how recursive Gs like ours (ones that generate an unbounded number of hierarchically structured objects capable of supporting meaning and articulation, that are non-counting, that support displacement, etc., etc., etc.) came to be fixed in the species. That’s what we want to understand. There is virtually no reason to think that SD&Co advances this question even one nano-nano-nano meter (how many nanometers to a jot?), for the rules they get the monkeys to acquire look nothing like the rules that characterize our Gs (recall, human Gs eschew mirror image rules, they link meaning with articulation, they show displacement, etc.).
Truth be told, GGers (and here I mean Chomsky) might be a little responsible for anyone thinking simple recursion is the central issue. We are responsible for two different reasons.
The first sin is that many a minimalist talk starts by insisting that the key feature of human linguistic facility is recursion. However, it is not recursion per se that is of interest (or not only recursion) but the very specific kind of recursion that we find in human language. There are boundlessly many recursive systems (even regular languages have recursive Gs that can generate an unbounded number of outputs). Our specific recursive Gs have very distinctive properties, and the relevant evolang question is not how recursion arose but how the specific hierarchical recursion we find in humans arose. And there is nothing in these papers that indicates that human linguistic facility is any less qualitatively distinctive than GGers have been arguing it is for the last 60 years. In fact, in some important ways, as I’ve noted, this recent foray into CT is several steps less interesting than the earlier failed work, as it fails to link syntactic recursion to semantic creativity, as earlier work tried to do (even if it largely failed).
The second failing can be laid at the feet of Syntactic Structures. It presents a neat little argument showing that natural language Gs cannot be modeled as finite state automata. This was worth noting at the time because Markov processes were taken to be useful models of human cognition/language. So Chomsky showed that this could not be true even looking at extremely superficial properties found in the sequential patterns of natural language (e.g. if…then…patterns). He then noted that PSGs could generate such patterns and argued that PSGs are also inadequate once one considers slightly more interesting generalizations found in natural language. The second argument, however, does not claim that PSGs cannot generate the relevant patterns, but that they do not do so well (e.g. they miss obvious generalizations, make Gs too complicated, etc.). Some concluded that this was a weak argument against PSGs (I disagree), but what’s more important is that the conclusion tacitly drawn was that being PSGish is what language is all about. There is a very strong whiff of this assumption in both papers. But this is wrong. Given Chomsky’s points, being (at least) PSGish is at most a necessary condition, in that Finite State Gs cannot even get off the ground. But we know that not all PSGs are the same and that natural language shows very distinctive PSG properties (e.g. headedness, binarity), so the necessary condition is a very weak feature, and having PSG capacities does not imply having those underlying human language. It’s a little like discovering that the secret to life is the prime factorization of an even number and concluding that I am closer to finding it because I know how to factor 6 into its primes. Knowing that it is even is knowing something, just not very much.
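The if…then point can be put in toy form (my sketch, not Chomsky's notation): "if S then S" clauses can nest inside one another, so the dependency between each "if" and its "then" can span an unbounded stretch of material. No fixed finite-state memory tracks that, but a single PSG-style recursive rule generates it, and a simple counter verifies it.

```python
# Toy version of the Syntactic Structures argument. One recursive rule,
#   S -> "if" S "then" S  |  "it-rains"
# generates arbitrarily deep if/then nesting; checking that every "if"
# has a matching later "then" needs an unbounded counter, which is more
# memory than any finite state machine possesses.

def generate(depth):
    """Expand S down to the given nesting depth."""
    if depth == 0:
        return ["it-rains"]
    return ["if"] + generate(depth - 1) + ["then"] + generate(depth - 1)

def well_nested(tokens):
    """Check if/then pairing with a counter (the unbounded memory an FSA lacks)."""
    open_ifs = 0
    for t in tokens:
        if t == "if":
            open_ifs += 1
        elif t == "then":
            open_ifs -= 1
            if open_ifs < 0:   # a "then" with no preceding open "if"
                return False
    return open_ifs == 0

print(generate(1))  # ['if', 'it-rains', 'then', 'it-rains']
```

Note that this toy rule, like the mirror rules taught to the monkeys, is still nothing like a human G: it has none of the headedness, binarity, or displacement the surrounding text insists on. It only marks the weak necessary condition of being supra-regular.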
Let’s return to the main discussion again. Fifth, if the point is to demonstrate that animals have fancy computational capacities (e.g. ones that require memory more involved than the kind we find in Finite State Machines) then we already have tons of evidence that this is so. In fact, I am willing to bet that being able to identify the position of the sun even when obscured, using that position to locate a food source, calculating a direct route back home despite a circuitous route out, communicating the position of this food source to others by systematically dancing in the dark and by being able to understand this message and reverse the trajectory of flight all the while calibrating the reliability of these messages by comparing their contents with a mental map specifying whether the communicated position is one of the possible positions for the food source is quite a bit more computationally involved than mirror reversing a sequence. In fact, I would bet that one comes close to using full Turing computational resources for this kind of calculation (e.g. memory much more involved than a stack, and computations at least as fancy as mirror reversal). So, if the aim is to show that animals have very fancy computational capacities, we already know that they do (caching behavior is similarly fancy, as is dead-reckoning etc.). Just take a look at some of the behaviors that Gallistel is fond of describing and you can put to rest the question of whether non-human animal cognition can be computationally very involved. It can be. We know this. But, as fancy as it can be, it is fancy in ways different from the ways human linguistic capacity is fancy and there is no reason to think that mastering one gives you a leg up on mastering the other (you try dancing in the dark to tell someone where to find a cheeseburger at 4 AM).
In sum, there is no reason to doubt that non-humans have fancy cognitive computational capacities. But there is also no way currently of going from their very fancy capacities to the ones that underlie our linguistic facility. And that, of course, is the problem of interest. If the aim of SD&Co was to demonstrate that non-humans can be computationally fancy shmancy, it was a waste of time.
Sixth, let me end with a much weaker concern. There is one more sense in which these kinds of experiments strike me as weak. What we believe about humans is that it is (partly) in virtue of having Gs that generate structures of certain sorts that humans have linguistic capacities of the kinds they have. So, for example, GGers argue that it is in virtue of having Gs with Merge, which allow movement and reconstruction, that humans can creatively understand the interpretation of sentences like Which of his1 books did every author insist you review. Without Gs of this sort, these behaviors could not be accounted for. Now, it is quite unclear to me if most of the animal literature on sequence pattern capacities (can they master AnBn or mirror image sequences?) shows that the animals succeed in virtue of having PSGs that generate such sequences. Maybe they do it some other way, using systems more powerful than PSGs. Let me explain what I mean.
If asked to verify whether a sequence of As and Bs was an AnBn sequence, I personally would count the As and Bs and see if they matched up. This would allow me to do the task without using a PSG that generates strings of n As followed by n Bs: in the first case I count, in the second I don't. Now, I might be off here, but I do not see that the experiments generally reported (including the SD&Co one) show that the animals strut their cognitive stuff in virtue of mastering Gs with the right properties. Note that PSGs can do what I do with counting, without the counting. But another system could solve the problem by counting. So how do we know the animals don’t count (note that we are usually talking of very short sequences (3 or 4 units long))? Or, more specifically, how do we know that they solve the problem by constructing/acquiring a PSG that they use to solve it? The fact that such a G could do this does not imply that this is how the monkeys actually do it. There are many ways to cognitively skin a problem. Using a non-counting PSG is one specific way. There are others.
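The "in virtue of" worry can be made concrete with a sketch (my illustration, not SD&Co's analysis): a counting strategy and a PSG-style strategy accept exactly the same AnBn strings, so success on the task alone cannot tell us which mechanism an animal is actually using.

```python
# Two extensionally equivalent ways to verify an AnBn sequence.
# Behavioral success is compatible with either mechanism.

def anbn_by_counting(seq):
    """Count the As, then check the remainder is B-only and the counts match."""
    a_count = 0
    while a_count < len(seq) and seq[a_count] == "A":
        a_count += 1
    b_part = seq[a_count:]
    return a_count > 0 and all(s == "B" for s in b_part) and len(b_part) == a_count

def anbn_by_psg(seq):
    """Recursive-descent check mirroring the PSG rule S -> 'A' S 'B' | 'A' 'B'."""
    if len(seq) >= 2 and seq[0] == "A" and seq[-1] == "B":
        inner = seq[1:-1]
        return inner == [] or anbn_by_psg(inner)
    return False

# The two strategies agree on every input, so observed behavior
# underdetermines the underlying computation.
for s in [list("AABB"), list("AAB"), list("ABAB"), list("AB")]:
    assert anbn_by_counting(s) == anbn_by_psg(s)
```

Distinguishing the two would take evidence beyond acceptance behavior, e.g. sensitivity to constituency rather than to cardinality, which is exactly what SD&Co report the monkeys lack.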
Ok, enough already. I am a fan of the work of both Dehaene and Fitch. I have learned a lot by reading them. But I don’t see the value of this computational revival of CT. It really does nothing to advance the relevant evolang or biolinguistic issues so far as I can tell. More worrisome is that it is generally oversold and, hence, misleading. It is time to bury the Continuity Thesis in all of its manifestations. Human language really is different. We even have some idea how. For evolang to have any interest, it needs to start asking how it could have evolved, how something with its distinctive properties could have arisen. The current crop of CT-inspired explorations of the Chomsky Hierarchy don’t do this, though they fail in a technically obscure way that covers up their general irrelevance. I prefer the older ape based ways of being wrong. They at least made for some fun theater. Thx Koko.
Charles’ most interesting observation in this wonderful little paper (read it!) is the following:
It is amazing how far anecdotes can take you when you are pushing a line whose truth is strongly desired. Herb Terrace and company are to be commended for studying the issue scientifically rather than a la NPR.
To this day, Nim has provided the only public database of signs from animal language studies. (By contrast, the ability of Koko, the famous talking gorilla who occasionally holds online chats, comes exclusively from her trainer’s interpretation and YouTube clips.)
There was always another thing odd about CT work. Say we could show that apes had linguistic capacities (nearly) identical to ours. That would allow us to explain human linguistic facility by saying that humans inherited it from a common ape/human ancestor.
So given this assumption, how we became linguistically facile is easily explained. But this would not really explain the question we are most interested in: how linguistic capacity in general arose. It would only explain why we have it given that our ape ancestors had it. But as the really interesting question is how language arose, not how it arose in us, the very same problem pops up now as regards our ape ancestors and monkeys, unless we assume that monkeys too have/had more or less the same linguistic facility as our ape ancestors. And so on through the clades. At some point, heading backwards through our ancestry we would come to a place where animal X had language while related animal Y did not. And when we got there, the very same problem we are trying to answer in the human/ape case would arise, and arise in the very same form. As such it is unclear what kind of progress we make by attributing to our ancestors (pale) versions of our capacities unless we are willing to go all the way and attribute paler and paler versions of this linguistic capacity all the way down (or out) the evo tree/bush.
This brings us to the real implicit assumption behind the earlier CT enterprise: everything talks. The differences we see are never qualitative ones. For unless we assume this, pushing the problem back a clade or two really does nothing to solve the conceptual puzzle of interest. So the inchoate assumption has always been that language is just a product of general intelligence, intelligence is common across animals (life?), and one only sees language emerge robustly when intelligence crosses a certain quantitative threshold. The only reason that dogs don’t talk is that they are too dumb. But don’t say this around a favored pet because they might get pissed.
Last point: this anti-modularity conception of cognition is just part of the general Eish conception of minds. Gallistel and Matzel have a nice discussion of how Associationism favors general intelligence approaches to the mind because modularity requires specific bespoke architectures.
Note that earlier CT work aimed at showing that this was possible in animals. Like I note below, Dolphins didn’t do badly wrt simple sequence/thematic meaning linkages. So they showed a kind of compositionality in their acquired G.
It was also not productive, capping out at 5 (at most). So there was no recursion here. What we would like to see is recursion coupled to semantic composition via a hierarchical syntax. Neither the dolphins nor the macaques provide this.
My recollection is that heavyweights like Suppes mooted as much.
The work on ‘Most’ (here) revolves around just this theme: truth conditions can be determined by various different generative procedures. The relevant GG question is the nature of the generative procedure. So too here. Are the monkeys solving their problems by using an acquired G that generates the relevant strings or are they doing it in some other way (e.g. by counting or ordinally ordering the inputs?).