My mother always told me that you should be careful what you
wish for because you just might get it. In fact, I’ve discovered that her
advice was far too weak: you should be careful what you idly speculate about as
it may come to pass. As readers know, my last post (
here)
questioned the value added of the review process based on recent research
noting the absence of evidence that reviewing serves its purported primary
function of promoting quality and filtering out the intellectually less
deserving. Well, no sooner did I write this than I received proof positive that
our beloved LSA has implemented a
no-review policy for
Language’s new online journal
Perspectives. Before I review the
evidence for this claim, let me say that though I am delighted that my
ramblings have so much influence and can so quickly change settled policy, I am
somewhat surprised at the speed with which the editors at
Language have adopted my inchoate maunderings. I would have hoped
that we might make haste slowly by first trying to make the review process progressively
less cumbersome before adopting more exciting policies. I did not anticipate
that the editors at
Language would be
so impressed with my speculations that they would immediately throw all caution
aside and allow anything at all, no matter how slipshod and ignorant, to appear
under its imprimatur. It’s all a bit dizzying, really, and unnerving (What
power! It’s intoxicating!). But why do things by halves, right?
Language has chosen to try out a bold
policy, one that will allow us to see whether the review process has any
utility at all.
Many of you will doubt that I am reporting the
aforementioned editorial policy correctly. After all, how likely is it that
anything I say could have such immediate impact? In fact, how likely is it that the LSA and
the editors of
Language and its
online derivatives even read FoL? Not
likely, I am sorry to admit. However,
unbelievable as it may sound, IT IS TRUE, and my evidence for this is the
planned publication of the target article by Ambridge, Pine and Lieven (APL) (“Child
language: why universal grammar doesn’t help”
here).
This paper is without any redeeming intellectual value and I can think of only
two explanations for how it got accepted for publication: (i) the radical
change in review policy noted above and (ii) the desire to follow the Royal
Society down the path of parody (see
here).
I have eliminated (ii) because unlike the Royal Society’s effort, APL is
not even slightly funny ha-ha (well, maybe
as slapstick, I’ll let you decide). So that leaves (i).
How bad is the APL paper? You can’t begin to imagine. However, to help you vividly taste its
shortcomings, let me review a few of its more salient “arguments” (yes, these
are scare quotes). A warning, however, before I start. This is a long post. I
couldn’t stop myself once I got started. The bottom line is that the APL paper
is intellectual junk. If you believe me, then you need not read the rest. But
it might interest you to know just how bad a paper can be. Finding zero on a
scale can be very instructive (might this be why it is being published? Hmm).
The paper goes after what APL identifies as five central
claims concerning UG: identifying syntactic categories, acquiring basic
morphosyntax, structure dependence, islands and binding. They claim to
“identify three distinct problems faced by proposals that include a role for
innate knowledge – linking, inadequate
data coverage, and redundancy… (6).” ‘Linking’ relates to
“how the learner can link …innate knowledge to the input language (6).”
‘Data-coverage’ refers to the empirical inadequacy of the proposed universals,
and ‘redundancy’ arises when a proposed UG principle proves to be accurate but
unnecessary as the same ground is covered by “learning procedures that must be
assumed by all accounts” and thus obviate the need “for the innate principle or
constraint” (7). APL’s claim is that all proposed UG principles suffer from one
or another of these failings.
Now far be it from me to defend the perfection of extant UG
proposals (btw, the principles APL discusses are vintage LGB conceptions, so I
will stick to these).
Even rabid defenders of the generative enterprise (e.g. me) can agree that the
project of defining the principles of UG is not yet complete. However, this is
not APL’s point: their claim is that the proposals are
obviously defective and
clearly
irreparable. Unfortunately, the paper contains not a single worthwhile
argument, though it does relentlessly deploy two argument forms: (i) The Argument
from copious citation (ACC), (ii) The Argument from unspecified alternatives
(AUA). It combines these two basic
tropes with one other: ignorance of the relevant GB literature. Let me
illustrate.
The first section is an attack on the assumption that we
need assume some innate specification of syntactic categories so as to explain
how children come to acquire them, e.g. N, V, A, P etc. APL’s point is that distributional analysis
suffices to ground categorization without this parametric assumption. Indeed,
the paper seems comfortable with the idea that the classical proposals critiqued
“seem to us to be largely along the right lines (16),” viz. that “[l]earners
will acquire whatever syntactic categories are present in a particular language
they are learning making use of both distributional …and semantic
similarities…between category members (16).” So what’s the problem? Well, it seems
that categories vary from language to language and that right now we don’t have
good stories on how to accommodate this range of variation. So, parametric theories
seeded by innate categories are incomplete and, given the conceded need for
distributional learning, not needed.
Interestingly, APL does not discuss how distributional learning
is supposed to achieve categorization. APL is probably assuming non-parametric
models of categorization. However, to function, the latter require
specifications of the relevant features that are exploited for categorization. APL,
like everyone else, assumes (I suspect) that we humans follow principles like
“group words that denote objects together,” “group words that denote events
together,” “group words with similar ‘endings’ together,” etc. APL’s point is
that these are not domain specific
and so not part of UG (see p. 12). APL is fine with innate tendencies, just not
language-particular ones like “tag words that denote objects as Nouns,” “tag words that denote events
as Verbs.” In short, APL’s point is that calling the groups acquired nouns,
verbs, etc. serves no apparent linguistic function. Or does it?
Answering this question requires asking why UG distinguishes
categories, e.g. nouns from verbs. What’s the purpose of distinguishing N or V in
UG? To ask this question another way: which GB module of UG cares about Ns, Vs,
etc? The only one that I can think of is the Case Module. This module identifies
(i) the expressions that require case (Nish things) (ii) those that assign it
(P and Vish things) and (iii) the configurations under which the assigners
assign case to the assignees (roughly government). I know of no other part of
UG that cares much about category labels.
If this is correct, what must an argument aiming to show
that UG need not natively specify categorical classes actually establish? It requires showing
that the distributional facts that Case Theory (CT) concerns itself with can be
derived without such a specification. In other words, even if categorization
could take place without naming the categories categorized, APL would need to
show that the facts of CT could also be derived without mention of Ns and Vs
etc. APL doesn’t do any of this. In fact, APL does not appear to know that the
facts about CT are central to UG’s adverting to categorical features.
Let me put this point another way: Absent CT, UG would
function smoothly if it assigned arbitrary tags to word categories, viz. ‘1’,
‘2’ etc. However, given CT and its role
in regulating the distribution of nominals (and forcing movement), UG needs category
names. CT uses these to explain data like *It was believed John to be intelligent,
*Mary to leave would be unwise, *John hopes Bill to leave, and *who do you wanna
kiss Bill vs. who do you wanna kiss. To argue against categories in UG requires
deriving these kinds of
data without mention of N/V-like categories. In other words, it requires
deriving the principles of CT from non-domain specific procedures. I personally
doubt that this is easily done. But, maybe I am wrong. What I am not wrong
about is that absent this demonstration we can’t show that an innate
specification of categories is nugatory. As APL doesn’t address these concerns
at all, its discussion is irrelevant to the question it purports to address.
There are other problems with APL’s argument: it has lots of
citations of “problems” pre-specifying the right categories (i.e. ACC), lots of
claims that all that is required is distributional analysis, but it contains no
specification of what the relevant features to be tracked are (i.e. AUA). Thus,
it is hard to know if APL is right that the kinds of syntactic priors that
Pinker and Mintz (and Gleitman and Co., sadly absent from the APL discussion)
assume can be dispensed with.
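To make the gap concrete, here is a minimal sketch of a distributional categorizer in Python. To be clear, this is my own toy illustration, not anything APL (or Mintz, or Pinker) proposes, and the corpus, the similarity measure, and the threshold are all placeholder assumptions. The point to notice is the stipulated prior flagged in the comments: even this bare-bones learner has to be told in advance which features to track.

```python
from collections import Counter, defaultdict
from math import sqrt

# Toy corpus; real proposals (e.g. Mintz's frequent frames) run on
# child-directed speech. Everything here is an illustrative assumption.
sentences = [
    "the dog sees the cat",
    "a dog chases a bird",
    "the bird sees a cat",
    "a cat sees the dog",
]

# THE STIPULATED PRIOR: which distributional features to track. Here, the
# immediately adjacent words, with sentence-boundary markers. Nothing in
# the data dictates this choice; it must be built into the learner.
profile = defaultdict(Counter)
for s in sentences:
    words = ["<s>"] + s.split() + ["</s>"]
    for i in range(1, len(words) - 1):
        profile[words[i]][("L", words[i - 1])] += 1
        profile[words[i]][("R", words[i + 1])] += 1

def cosine(u, v):
    dot = sum(u[f] * v[f] for f in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Greedy agglomeration: a word joins a cluster if it is similar enough to
# some member (the 0.4 threshold is another stipulation).
clusters = []
for w in profile:
    for cl in clusters:
        if max(cosine(profile[w], profile[m]) for m in cl) > 0.4:
            cl.append(w)
            break
    else:
        clusters.append([w])

print(clusters)  # [['the', 'a'], ['dog', 'cat', 'bird'], ['sees', 'chases']]
```

Note too that the output groups are unlabeled: nothing in the procedure says which cluster is ‘N’, which is exactly where the Case-theoretic worry raised above gets its grip.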
But all of this is somewhat beside the point given what was said earlier: APL
doesn’t correctly identify the role that categories play in UG, and so the
presented argument, even if correct, doesn’t address the relevant issues.
The second section deals with learning basic morphosyntax.
APL frames the problem in terms of divining the extension of notions like
SUBJECT and OBJECT in a given language. It claims that nativists require that
these notions be innately specified parts of UG because they are “too abstract
to be learned” (18).
I confess to being mystified by the problem so construed. In
GB world (the one that APL seem to be addressing), notions like SUBJECT and
OBJECT are not primitives of the theory. They are purely descriptive notions,
and have been since Aspects. So, at least in this little world, whether
such notions can be easily mapped to external input is not an important problem. What the GB version of UG does need is a
mapping to underlying structure (D-S(tructure)). This is the province of theta
theory, most particularly UTAH in some version. Once we have DS, the rest of UG
(viz. case theory, binding theory, ECP) regulates where the DPs will surface in
S-S(tructure).
So though GB versions of UG don’t worry about notions like
SUBJECT/OBJECT, they do need notions that allow the LAD to break into the
grammatical system. This requires primitives with epistemological priority (EP) (Chomsky’s term) that allow the LAD
to map PLD onto grammatical structure. Agent
and patient seem suited to the task
(at least when suitably massaged as per Dowty and Baker). APL discusses Pinker’s version of this kind
of theory. Its problem with it? APL claims that there is no canonical mapping
of the kind that Pinker envisages that covers every language and every
construction within a language (20-21). APL cites work on split ergative
languages and notes that deep ergative languages like Dyirbal may be particularly
problematic. It further observes that many of the problems raised by these
languages might be mitigated by adding other factors (e.g. distributional
learning) to the basic learning mechanism. However, and this is the big point,
APL concludes that adding such learning obviates the need for anything like
UTAH.
APL’s whole discussion is very confused. As APL notes, the
notions of UG are abstract. To engage the system, we need a few notions that enjoy EP.
UTAH is necessary to map at least some
input smoothly to syntax (note: EP does not require that every input to the syntax be mapped via
UTAH to D-S). There need only be a core set of inputs that cleanly do so in
order to engage the syntactic system. Once primed, other kinds of information
can be used to acquire a grammar. This is the kind of process that Pinker
describes, and it obviates the need for a general UTAH-like mapping.
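For concreteness, here is how such a canonical mapping might be stated, a toy Python sketch of my own rather than Pinker’s or Baker’s actual formulation (the role labels and position names are placeholder assumptions). The point is just that a small fixed role-to-position table gives the LAD a deterministic way into the syntax for core inputs:

```python
# A toy UTAH-style mapping (my own illustrative version): thematic roles
# project to fixed D-structure positions. The table and labels are
# placeholder assumptions, not a quotation of anyone's actual theory.
UTAH = {"agent": "Spec,VP", "patient": "Compl,V"}

def to_dstructure(verb, args):
    """args maps thematic role -> phrase; returns D-structure position -> phrase."""
    positions = {UTAH[role]: phrase for role, phrase in args.items()}
    positions["V"] = verb
    return positions

# A "core" input: the roles are transparent from the scene, so the
# canonical mapping applies directly.
print(to_dstructure("kiss", {"agent": "the dog", "patient": "the cat"}))
# {'Spec,VP': 'the dog', 'Compl,V': 'the cat', 'V': 'kiss'}
```

Once a handful of core inputs have been mapped this way, the learner has D-structures to work with, and distributional or language-particular information can take over; that is the priming just described.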
Interestingly, APL agrees with Pinker’s point but bizarrely
concludes that this obviates the need for EPish notions altogether, i.e. for finding
a way to get the whole process started. However, the fact that other factors
can be used
once the system is
engaged does not mean that the system can be engaged without some way to get it
going. Given a starting point, we can move on. APL doesn’t explain how to get
the enterprise off the ground, which is too bad, as this is the main problem
that Pinker and UTAH address.
So once again, APL’s discussion fails to engage UG’s main worry: how to
initially map linguistic input onto DS so that UG can work its magic.
APL has a second beef with UTAH-like assumptions. APL
asserts that there is just so much variation cross-linguistically that there
really is NO possible canonical
mapping to DS to be had. What’s APL’s argument? Well, the ACC: argument by
citation. The paper cites research that claims there is unbounded variation in
the mapping principles from theta roles to syntax and concludes that this is
indeed the case. However, as any moderately literate linguist knows, this is
hotly contested territory. Thus, responsibly making the point APL wants to make requires adjudicating these
disputes. It requires discussing e.g. Baker’s and Legate’s work and showing
that their positions are wrong. It does not
suffice to note that some have argued
that UTAH-like theories cannot work if others have argued that they can. Citation is not argumentation, though APL
appears to proceed as if it were. There has
been quite a bit of work on these topics within the standard tradition that APL
ignores (Why? Good question). The absence of any discussion renders APL’s conclusions
moot. The skepticism may be legitimate (i.e. it is not beside the point).
However, nothing APL says should lead any sane person to conclude that the
skepticism is warranted as the paper doesn’t exercise the due diligence
required to justify its conclusions. Assertions are a dime a dozen. Arguments
take work. APL seems to mistake the first for the second.
The first two sections of APL are weak. The last three
sections are embarrassing. In these, APL fully exploits AUAs and concludes that
principles of UG are unnecessary. Why? Because the observed effects of UG
principles can all be accounted for using pragmatic discourse principles that
boil down to the claim that “one cannot extract elements of an utterance that
are not asserted, but constitute background information” …and “hence that only
elements of a main clause can be extracted or questioned” (31-32). For the case
of structure dependence, APL supplements this pragmatic principle with the further
assertion that “to acquire a structure-dependent grammar, all a learner has to
do is to recognize that strings such as the
boy, the tall boy, war and happiness share both certain functional and – as a consequence –
distributional similarities” (34). Oh boy!! How bad is this? Let me count some of the ways.
First, there is no semantic or pragmatic reason why back-grounded
information cannot be questioned. In fact, the contention is false. Consider
the Y/N question in (1) and appropriate negative responses in (2):
(1) Is it the case that eagles that can fly can swim?
(2) a. No, eagles that can SING can swim
     b. No, eagles that can fly can SING
Both (2a,b) are fine answers to the question in (1). Given
this, why can we form the question with answer (2b), as in (3b), but not the
question corresponding to the answer in (2a), as in (3a)? Whatever is going on has nothing to do with whether it is possible
to question the content of relative clause subjects. Nor is it obvious how
“recogniz[ing] that strings such as the
boy, the tall boy, war and happiness share both certain functional …and distributional
similarities” might help matters.
(3) a. *Can eagles that fly can swim?
     b. Can eagles that can fly swim?
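The contrast in (3) is easy to make concrete. Here is a toy Python sketch, my own illustration rather than anything from APL or Berwick et al., contrasting a structure-blind rule that fronts the linearly first auxiliary with a structure-dependent rule that fronts the auxiliary following the whole subject. Note that the one piece of information the second rule needs, where the subject ends, is hand-supplied: knowing that the boy and war are distributionally similar does not provide it.

```python
AUX = {"can", "will", "must"}

# The declarative underlying (3); its subject is "eagles that can fly".
declarative = "eagles that can fly can swim".split()

def front_first_aux(words):
    """Structure-blind rule: front the linearly first auxiliary."""
    i = next(i for i, w in enumerate(words) if w in AUX)
    return [words[i].capitalize()] + words[:i] + words[i + 1:]

def front_matrix_aux(words, subject_len):
    """Structure-dependent rule: front the first auxiliary after the
    subject. subject_len (hand-supplied!) marks the subject's right edge."""
    i = next(i for i, w in enumerate(words) if w in AUX and i >= subject_len)
    return [words[i].capitalize()] + words[:i] + words[i + 1:]

print(" ".join(front_first_aux(declarative)) + "?")
# -> Can eagles that fly can swim?   i.e. the ungrammatical (3a)
print(" ".join(front_matrix_aux(declarative, 4)) + "?")
# -> Can eagles that can fly swim?   i.e. the grammatical (3b)
```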
This is not a new point and it is amazing how little APL has
to say about it. In fact, the section on structure dependence quotes and seems
to concede all the points made in the Berwick et al. 2011 paper (see
here).
Nonetheless, APL concludes that there is no problem in explaining the structure
dependence of T-to-C movement if one assumes that back-grounded info is frozen for
pragmatic reasons. However, since this is obviously false, as a moment’s thought
will show, APL’s alternative “explanation” goes nowhere.
Furthermore, APL doesn’t really offer an account of how
back-grounded information might be relevant, as the paper nowhere specifies what
back-grounded information is or in which contexts it appears. Nor does APL explicitly offer any pragmatic
principle that prevents establishing syntactic dependencies with back-grounded
information. APL has no trouble specifying the GB principles it critiques, so I
take the absence of a specification of the pragmatic theory to be quite
telling.
The only hint APL provides as to what it might intend (again
copious citations, just no actual proposal) is that because questions ask for new
information and back-grounded structure is old information, it is impossible to
ask a question regarding old information (cf. p. 42). However, this, if it is
what APL has in mind (which, again, is unclear, as the paper never actually makes
the argument explicitly), is both false and irrelevant.
It is false because we can focus within a relative clause island,
the canonical example of a context where we find back-grounded info (cf. (4a)).
Nonetheless, we cannot form the question (4b) for which (4a) would be an
appropriate answer. Why not? Note, it cannot be because we can’t focus within
islands, for we can, as (4a) indicates.
(4) a. John likes the man wearing the RED scarf
     b. *Which scarf does John like the man who wears?
Things get worse quickly. We know that there are languages
that in fact have no trouble asking questions (i.e. asking for new info) using
question words inside islands. Indeed, a good chunk of the last thirty years of
work on questions has involved wh-in-situ
languages like Chinese or Japanese where these kinds of questions are all perfectly
acceptable. You might think that APL, given its claims concerning the pragmatic
inappropriateness of questioning back-grounded material, would discuss these
kinds of well-known cases. You might, but you would be wrong. Not a peep. Not a
word. It’s as if the authors didn’t even know such things were possible (nod
nod wink wink).
But it gets worse still: ever since forever (i.e. from Ross)
we know that island effects per se
are not restricted to questions. The same effects appear with
structures having nothing to do with focus, e.g. relativization and
topicalization, to name two relevant constructions. These exhibit the very same
island effects that questions do, but in these constructions the manipulanda do
not involve focused information at all. If the problem is asking for new info from a back-grounded source,
then why can’t operations that target old
back-grounded information form dependencies into the relative clause? The central fact about islands is that it
really doesn’t matter what the moved element means: you cannot move it out (‘move’ here denotes a particular
kind of grammatical operation). Thus, if you can’t form a question via movement,
you can’t relativize or topicalize using movement either. APL does not seem
acquainted with this well-established point.
One could go on: e.g. resumptive pronouns can obviate island
effects while their non-resumptive analogues do not, despite semantic and
pragmatic informational equivalence; and even languages like Swedish/Norwegian
do not allow extraction from just any
island whatsoever, contrary to what APL suggests. All of this is relevant to
APL’s claims concerning islands. None of it is discussed, nor hinted at.
Without mention of these factors, APL once again fails to address the problems
that UG-based accounts have worried about and discussed for the last 30 years.
As such, the critique advanced in this section on islands is, once again,
largely irrelevant.
APL’s last section on binding theory (BT) is more of the
same. The account of principle C effects in cases like (5) relies on another
pragmatic principle, viz. that it is “pragmatically anomalous to use a full
lexical NP in part of the sentence that exists only to provide background
information” (48). It is extremely unclear what this might mean. However, on at least the most obvious
reading, it is either incorrect or much too weak to account for principle C
effects. Thus, one can easily get full NPs within back-grounded structure (e.g.
relative clauses like (5a)). But within the relative clause (i.e. within the
domain of back-grounded information), we still find principle C effects
(contrast (5a,b)).
(5) a. John met a woman who knows that Frank1 loves his1 mother
     b. *John met a woman who knows that he1 loves Frank’s1 mother
The discussion of principles A and B is no better. APL does
not explain how pragmatic principles explain why reflexives must be “close” to
their antecedents (*John said that Mary loves himself or *John believes
him/himself is tall), why they cannot be anteceded by John in structures like
John’s mother upset himself (where the antecedent fails to c-command the
reflexive even though the two are clause-mates), why they must be preceded by
their antecedents (*Mary believes himself loves John), etc. In other words, APL
does not discuss BT and the facts that have motivated it at all, and so the
paper provides no evidence for the conclusion that BT is redundant
and hence without explanatory heft.
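Since the configurational notion of c-command carries the explanatory load in these cases, a toy checker may help fix ideas. This is my own illustration, with a crude hand-built tree standing in for a real analysis; it shows why John in John’s mother upset himself fails as an antecedent even though it sits in the same clause as the reflexive: it does not c-command it.

```python
# Toy c-command checker over hand-built constituency trees. Nodes are
# (label, [children]) tuples; leaves are strings. The tree below is a
# rough approximation supplied only to illustrate the configurational point.

def parent_map(tree, parent=None, pm=None):
    """Map each subtree (by identity) to its parent node."""
    if pm is None:
        pm = {}
    pm[id(tree)] = parent
    if isinstance(tree, tuple):
        for child in tree[1]:
            parent_map(child, tree, pm)
    return pm

def dominates(node, target):
    if node is target:
        return True
    return isinstance(node, tuple) and any(dominates(c, target) for c in node[1])

def c_commands(a, b, tree):
    """a c-commands b iff a does not dominate b and a's parent does
    (a simplification of the usual 'first branching node' definition)."""
    p = parent_map(tree)[id(a)]
    return p is not None and not dominates(a, b) and dominates(p, b)

# "John's mother upset himself": John is buried inside the subject NP.
john, himself = ("NP", ["John"]), ("NP", ["himself"])
subject = ("NP", [john, ("N", ["'s mother"])])
tree = ("S", [subject, ("VP", [("V", ["upset"]), himself])])

print(c_commands(john, himself, tree))     # False: John cannot bind himself
print(c_commands(subject, himself, tree))  # True: "John's mother" could
```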
This has been a long post. I am sorry. Let me end. APL is a
dreadful paper. There is nothing there. The question, then, is why Perspectives accepted it for publication. Why
would a linguistics venue accept such
a shoddy piece of work on linguistics
for publication? It’s a paper that displays no knowledge of the relevant
literature, and presents not a single argument (though assertions aplenty) for
its conclusions. Why would a journal sponsored by the LSA allow the linguistic
equivalent of flat-earthism to see the light of day under its imprimatur? I can
think of only one reasonable explanation for this: the editors of Language have decided to experiment with
a journal that entirely does away with the review process. And I fear I am to
blame. The moral: always listen to your mother.