MIT News (here) mentions a paper that recently appeared in Frontiers in Psychology (here) by Vitor Nóbrega and Shigeru Miyagawa (N&M). The paper is an Evolang
effort that argues for a rapid (rather than a gradual) emergence of FL. The
blessed event was “triggered” by the emergence of Merge which allowed for the
“integration” of two “pre-adapted systems,” one relating to outward expression
(think AP) and one related to referential meaning (think CI). N&M calls the
first the E-system and the second the L-system. The main point of the paper is
that the L-system does not correspond to anything like a word. Why? Because
words found in Gs are themselves hierarchically structured objects, with
structures very like the kind we find in phrases (a DMish perspective). The
paper is interesting and worth looking at, though I have more than a few
quibbles with some of the central claims. Here are some comments.
N&M has two aims: first, to rebut gradualist claims concerning the evolution of FL; second, to provide a story for the rapid emergence of the faculty. I personally found the criticisms more compelling than the positive proposal. Here’s why.
The idea that FL emerged gradually generally rests on the
idea that FL builds on more primitive systems that went from 1-word to 2-word
to arbitrarily large n-word sequences.
My problem with these kinds of stories has always been how we get from 2 to arbitrarily large n. As Chomsky has noted, “go on indefinitely” does not obviously arise from “go to some fixed n.” The recursive trick that Merge embodies does not conceptually require priming by finite instances to get it going. Why? Because there is no valid inference from “I can do X once, twice” to “I can do X indefinitely many times.” True, to get to ‘indefinitely many X’ might causally (if not conceptually) require seguing via finite instances of X, but if it does, nobody has explained how it does. Brute facts causing other brute facts does not an explanation make.
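The gap can be made concrete with a toy sketch (my illustration, not anything in N&M): a system hard-coded to produce at most 2-unit sequences is a finite list generator, while Merge is one recursive rule. The second is a different kind of object, not a bigger instance of the first.

```python
# Toy illustration only (not a linguistic model): a bounded system
# vs. a recursive one. The function names are my own invention.

def bounded_pairs(lexicon):
    """Enumerate all 1- and 2-word 'utterances': a finite list.
    Nothing here generalizes to 3-word outputs, let alone to n."""
    out = [(w,) for w in lexicon]
    out += [(a, b) for a in lexicon for b in lexicon]
    return out

def merge(x, y):
    """Merge as set formation: Merge(x, y) = {x, y}.
    One rule, applicable to its own outputs, yields unboundedly
    many, unboundedly deep objects."""
    return frozenset([x, y])

lex = ["eat", "apples"]
print(len(bounded_pairs(lex)))   # 6 outputs, and that is all there ever is

s1 = merge("eat", "apples")      # {eat, apples}
s2 = merge("John", s1)           # {John, {eat, apples}} -- and so on, indefinitely
```

The point the sketch makes: no amount of inspecting `bounded_pairs` hands you `merge`; the recursive step has to come from somewhere else.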
Let me put this another way: Perhaps as a matter of historical fact our ancestors did go through a protolanguage to get to FL. However, it has never been explained how going through such a stage was/is required to get to the recursive FL of the kind we have. The gradualist idea seems to be
that first we tried 1-word sequences, then 2-word, and that this prompted the idea to go to 3-, 4-, n-word sequences for arbitrary n. How exactly this is supposed to have happened, absent already having the idea that “going on indefinitely” was OK, has never been explained (at least to me). As this is
taken to be a defining characteristic of FL, failing to show the link between the
finite stages and the unbounded one (a link that I believe is conceptually
impossible to show, btw) leaves the causal relevance of the earlier finite
stages (should they even exist) entirely opaque (if not worse).
So, the argument that recursion “gradually”
emerged is not merely wrong, IMO, it is barely coherent, at least if one’s
interest is in explaining how unbounded hierarchical recursion arose in the
species.
N&M hints at a second account that, IMO, is not as
conceptually handicapped as the one above. Here it is: One might imagine a
system in place in our ancestors capable of generating arbitrarily big “flat”
structures. Such structures would be different from our FL in not being hierarchical,
and the same in being unbounded. These procedures, then, could generate
arbitrarily “long” structures (i.e. the flat structures could be indefinitely
long (think beads on a string) but have 0-depth).
Now we can ask a question: how can one get
from the generative procedures that deliver arbitrarily long strings to our
generative procedures which deliver structures that are both long and deep? I
confess to having been very attracted to this conception of Darwin’s Problem
(DP). DP so understood asks for the secret sauce required to go from “flat”
n-membered sets (or sequences for arbitrary n) to the kind of arbitrarily
deeply hierarchically structured sets (or graphs or whatever) we find in Gs
produced by FL. I have a dog in this fight (see here), though I am not that wedded to the answer I gave (in terms of labeling being the novelty that precipitated change). This version of the problem finesses the question of where recursion came from (after all, it assumes that we have a procedure to generate arbitrarily long flat structures) and substitutes the question of where hierarchical recursion came from. At any rate, the two strike me as different, the second not suffering from the conceptual hurdle besetting the first.
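The contrast can be made concrete with a small sketch (again my own illustration, not N&M's formalism): a flat generator can produce strings of any length while depth stays constant, whereas binary Merge produces objects whose depth grows with their length.

```python
# Toy contrast (illustrative only): unbounded-but-flat vs. unbounded-and-deep.
# All names here are my own, chosen for the illustration.

def flat(words):
    """Beads on a string: arbitrarily long, but depth never grows."""
    return tuple(words)

def merge_right(words):
    """Right-branching binary Merge: depth grows with length."""
    out = words[-1]
    for w in reversed(words[:-1]):
        out = (w, out)   # Merge rendered as an ordered pair for readability
    return out

def depth(x):
    """Nesting depth: 0 for an atom, 1 + deepest member otherwise."""
    if not isinstance(x, tuple):
        return 0
    return 1 + max(depth(e) for e in x)

ws = ["the", "dog", "chased", "cats"]
print(depth(flat(ws)))         # 1 -- and still 1 for a 1000-word string
print(depth(merge_right(ws)))  # 3 -- grows without bound as the string grows
```

Darwin's Problem, on this construal, asks what had to be added to the first procedure to get the second.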
N&M provides more detailed arguments against several current
proposals for a gradualist conception for the evolution of FL. Many of these
seem to take words as fossils of the earlier evolutionary stages. N&M
argues that words cannot be the missing link that gradualists have hoped for.
The discussion is squarely based on Distributed Morphology reasoning and
observations. I found N&M’s arguments well taken. However,
given the technical requirements needed to follow the details, I fear that
tyros (i.e. the natural readership of Frontiers)
will remain unconvinced. This said, the points seem dead on target.
This brings us to the second aim of the paper, and here I
confess to having a hard time following the logic. The idea seems to be that Merge, when added to the E-system we find in bird song and the L-system we find in vervets, gets us the kinds of generative systems we find in the G products of FL. This is, sort of, a version of the classical Minimalist answer to DP favored by Chomsky. I say “sort of” as Chomsky, at least lately,
has been making a big deal of the claim that the mapping to E systems is a late
accretion and the real action is in the mapping to thought. I am not sure that
N&M disagrees with this (the paper doesn’t really discuss this point) as I
am not sure how the L-system and Chomsky’s CI interface relate to one another.
The L-system seems closer to concepts than full-blown propositional
representations, but I could be wrong here.
At any rate, this seems to be the N&M view.
Here’s my problem; in fact, a few. First, this seems to ignore the various observations that, whatever our L-atoms are, they seem different in kind from what we find in animal communication systems. The fact
seems to be that vervet calls are far more “referential” than human “words”
are. Ours are pretty loosely tied to whatever humans may use words to refer to.
Chomsky has discussed these differences at length (see
here for a recent critique of “referentialism”) and if he
is in any way correct it suggests that vervet calls are
not a very good proxy for what our linguistic atoms do as the two
have very different properties. N&M might agree with this, distinguishing
roots from words and saying that our words have the Chomsky properties but our
concepts are vervetish. But how turning roots into words manages this remains,
so far as I can see, a mystery. Chomsky notes that the question of where the properties of our lexical items come from is at present completely mysterious.
But the bottom line, as Chomsky sees it (and I agree with him here), is that “[t]he minimal meaning-bearing elements of human languages – word-like, but not words – are radically different from anything known in animal communication systems.” And if
this is right, then it is not clear to me that Merge alone is sufficient to
explain what our language manages to do, at least on the lexical side. There is
something “special” about lexicalization that we really don’t yet understand
and it does not seem to be reducible to Merge and it does not seem to really
resemble the kinds of animal calls that N&M invokes. In sum, if Merge is
the secret sauce, then it did more than link to a pre-existing L-system of the
kind we find in vervet calls. It radically changed their basic character. How
Merge might have done this is a mystery (at least to me (and, I believe,
Chomsky)).
Again, N&M might agree, for the story it tells does not
rely exclusively on Merge to bridge the gap. The other ingredient involves
checking grammatical features. By
“grammatical” I mean that these features are not reducible to the features of
the E or L systems. Merge’s main grammatical contribution is to allow these
grammatical features to talk to one another (to allow valuation to apply). As
roots don’t have such features, merging roots would not deliver the kinds of
structures that our Gs do as roots do not have the wherewithal to deliver “combinatorial
systems.” So it seems that in addition
to Merge, we need grammatical features to deliver what we have.
The obvious question is where these syntactic features come from. More pointedly, Merge for N&M seems to be combinatorially idle absent these features. So Merge as such is
not sufficient to explain Gish generative procedures. Thus, the real secret
sauce is not Merge but these features and the valuation procedures that they
underwrite. If this is correct, the deep Evolang question concerns the genesis
of these features, not the operation instructing how to put grammatical objects
together given their feature structures. Or, put another way: once you have the
features how to put them together seems pretty straightforward: put them together
as the features instruct (think combinatorial grammar here or type theory).
Darwin’s Problem on this conception reduces to explaining how these syntactic features got a mental toehold.
Merge plays a secondary role, or so it seems to me.
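The "features instruct the combinatorics" point can be sketched in a few lines (a toy categorial-grammar fragment of my own devising, not N&M's system): once an item's feature says what it wants to combine with, the combination rule itself is trivial.

```python
# Toy sketch (my illustration, not N&M's formalism): feature-driven
# combination in the style of categorial grammar. An item of category
# (A, B) -- read "A, looking for a B" -- combines with a B to yield an A.

def combine(x, y):
    """Forward application: all the combinatorial work is done by the
    features; the operation just checks them and builds the pair."""
    cat_x, cat_y = x["cat"], y["cat"]
    if isinstance(cat_x, tuple) and cat_x[1] == cat_y:
        return {"form": (x["form"], y["form"]), "cat": cat_x[0]}
    raise ValueError("features do not match; no combination licensed")

the = {"form": "the", "cat": ("DP", "NP")}   # a DP looking for an NP
dog = {"form": "dog", "cat": "NP"}

dp = combine(the, dog)   # {'form': ('the', 'dog'), 'cat': 'DP'}
```

Note that `combine` itself is nearly contentless: delete the features and nothing licenses anything. That is the sense in which the features, not the combining operation, look like the real secret sauce.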
To be honest, the above problem is a problem for every
Minimalist story addressing DP. The Gs we are playing with in most contemporary
work have two separate interacting components: (i) Merge serves to build hierarchy, (ii) AGREE in Probe-Goal configurations checks/values features. AGREE operations, to my knowledge, are not generally reducible to Merge (in
particular I-merge). Indeed trying to unify them, as in Chomsky’s early
minimalist musings, has (IMO, sadly) fallen out of fashion.
But if they are not unified and most/many non-local dependencies are the
province of AGREE rather than I-merge, then Merge
alone is not sufficient to explain the emergence of Gs with the characteristic
dependencies ours embody. We also need a story about the etiology of the long
distance AGREE operation and a story about the genesis of the syntactic
features they truck in.
To date, I know of no story addressing this, not even very speculative ones. We
could really use some good ideas here (or, as in note 3, begin to rethink the
centrality of Probe/Goal Agree).
I don’t want to come off sounding overly negative. N&M, unlike many evolangers, know a lot about FL. Their critique of gradualist
stories seems to be very well aimed. However, precisely because the authors
know so much about FL while trying to give a responsible positive outline of an answer to DP, the paper makes clear the outstanding problems that
providing an adequate explanation sketch
faces. For this alone, N&M is worth reading.
So what’s the takeaway message here? I think we know what a
solution to DP in the domain of language should involve. It should provide an
account of how the generative procedures responsible for the G properties we
have discovered over the last 60 years arose in the species. The standard
Minimalist answer has been to focus on Merge and argue that adding it to the capacities of our non-linguistic ancestors suffices to give them our kinds of
grammatical powers. Now, there is no doubting that Merge does work wonders.
However, if current theoretical thinking is on the right track, then Merge
alone is insufficient to account for the various non-local dependencies that we
find in Gs. Thus, Merge alone does not deliver what we need to fully explain
the origins of our FL (i.e. it leaves out a large variety of agreement
phenomena).
In this sense, either we need some ideas about where AGREE comes from, or we need some work showing how to accommodate the phenomena that AGREE does via I-merge. Either way, the story that ties the evolutionary origins of our FL to the emergence of a single novel Merge operation is, at best, incomplete.
Here, from Edward St Aubyn in At Last, the final Patrick Melrose novel: “Ok, so who created infinite regress?” That’s the right question.