Friday saw the launch of the new Maryland Language Science Center (here) at UMD. As part of the festivities, Colin Phillips asked various people to try to divine the next big questions. I was asked to prognosticate about syntax and accepted only because I was assured that the performance was capped at five minutes, thereby precluding any real opportunity for self-embarrassment. As this fear abated, I came to enjoy thinking the question through, at least a little bit, and so I have decided to opine even more publicly. What I really hope is that my coming out of
the intellectual closet will provoke others to do so as well. So here’s the
plan: I would like to invite you to join
me in thinking about “what’s next in X” where you fill in X with your own
subdomain of linguistics broadly construed (e.g. phonology, morphology, semantics, acquisition, etc.). If you want to do so in the comments section, great.
But, if you want a little more space to expatiate (and room for some notes),
send me a 2-3 page Word document which I will happily post on your behalf on
the blog (if I like it) and let you handle comments in any way you wish.
This is the sort of thing that colleagues often do over a
couple of beers at conferences and workshops, right after dissecting
indispensable gossip and advancing the careers of deserving protégés. My
proposal is that we go public and let everyone in on our hunches. What do you
think are the questions we should be going after in the immediate future and
why? What are the problems that you see as ripe for the picking now, and why (and especially, why now)? How do you see the intellectual landscape in linguistics in general, or in your sub-domain of interest, over the next 5 years? 10 years? These are the sorts of questions we (or at least my colleagues at UMD and I) have standardly asked those seeking tenure-track employment here. It only seems fair that if we are asking these questions of others, we should be ready to speculate ourselves. So without further ado, here are a few of my kickoff answers.
Let’s start with some basics. Syntax starts from the
observation that grammatical structure has two sources: (i) the combinatoric
system (viz. the rules) and (ii) the elements combined (viz. linguistic
atoms). IMO, Generative Grammar over the
last 50 years has discovered a bunch of invariant “laws” about the combinatoric
system that underlies human linguistic facility. The success of the program has
been in finding and articulating the fine structure of these “formal”
universals (e.g. bounding, binding, control, minimality, etc.). Given this, here are some projects that I
think should be on the immediate research agenda.
1. Unification
Minimalism (MP), as I see it, has aimed to unify the
disparate modules of UG. The project has been roughly analogous to what Chomsky
(in ‘On Wh Movement’) accomplished wrt Ross’s theory of islands. This time, the
“islands” are the modules of GB and the aim is to see how the laws of grammar
proprietary to each module are actually manifestations of a very small number
of simple principles. This project has been partially implemented. So, for example (beware: the following is tendentious in that it reflects my views, not necessarily the field's consensus), Merge-based accounts have unified the following modules: case,
theta, control, binding (in part)[1],
and phrase structure. However, large parts of GB remain outside this
Merge-unified framework. The two obvious ones are Island/bounding theory and
the ECP. Of these, phases can be ornamented to incorporate bounding theory (i.e. there is a pretty easy translation between phase theory and the old subjacency account of islands). The resulting account has problems from a minimalist perspective, but it can at least be made to fit, in essentially the same way that it fit into GB.
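For a familiar illustration of the translation (a standard textbook case, sketched here only schematically): in "What do you wonder who bought?", the embedded [Spec, CP] escape hatch is occupied by who. The subjacency account then says that what must cross two bounding nodes in a single step, which is barred; the phase-based account says that what must move out of the embedded CP phase without stopping at its edge, in violation of the PIC. Either way, the same configuration gets flagged, which is why the translation is fairly mechanical.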
It is far less clear how to fit the GB ECP into a minimalist
mold, especially the “argument/adjunct” asymmetries that were the focus of
discussion for well over a decade.[2]
It is still quite unclear, at least to me, how to code the differences between
those chains that require a very strict licensing condition (roughly that
provided by successive cyclic adjunct chains) and those that allow more liberal
licensing (roughly those that allow traces in comp to be deleted without
disrupting the licensing). It is even
unclear what the ECP is a condition on. In GB, it was on traces, null elements
considered to be grammatically problematic and so requiring licensing of some
sort. But the Copy Theory eliminates traces, so within MP it is conceptually unclear why something like the ECP is needed. How (or whether) to unify the
bounding and ECP modules with the others is, hence, a good problem for MPists
interested in grammatical unification.
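To recall the sort of contrast at stake (a standard textbook pair, offered here just to fix ideas): extracting an argument from a wh-island, as in "Which car do you wonder whether John fixed?", is degraded but relatively tolerable, while extracting an adjunct, as in "How do you wonder whether John fixed the car?" (asking about the manner of fixing), is sharply out. The GB ECP was built to capture exactly this kind of asymmetry, and it is this residue that has yet to find a natural minimalist home.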
2. Exceptions to discovered invariances
On a more empirical front, it is worth considering how much
variation our GB principles tolerate. Why? Because where invariances break
down, learning is required and learning needs data that is plausibly often
absent. Let me explain by giving two examples.
It has been claimed that islands are not uniform
cross-linguistically. In the early days, Italian and English were distinguished wrt their inventory of bounding nodes. Pretty quickly, however, we discovered that many of the sentences that allowed extraction out of apparent islands in Italian were pretty good in
English as well (Grimshaw noted this pretty early on). Recently, Dave Kush has done the same for CNPC/Relative
Clause (RC) violations in Swedish versus English. He has demonstrated pretty
conclusively, IMO, that the acceptability of some RC violations in Swedish coincides with upgraded acceptability of analogous RC structures in English. In
other words, whatever is going on in Swedish is not limited to Swedish but
appears in English as well. This is a good result. Why? If it’s true, then it
is what we would expect to be true given pretty standard PoS considerations: the data a learner would need to fix language-particular facts about extraction from RCs should be pretty sparse, so we should not expect genuine variation here. If Kush is right, we don't find it. Good.[3]
Question: what of other claimed cases of variation, e.g.
fixed subject constraints (aka, that-t
effects)? Are they real or only apparent? This is worth nailing down and now is
a good time to do this.[4]
Why?
Well, I believe that settling these issues may require using slightly more refined methods of data collection than has been our syntactic habit.
Kush, for example, did standard rating studies and used them to find that
despite differences in absolute ratings between Swedish and English speakers,
the relative improvements in the same contexts were comparable. This is not fancy stats, but it is, in this
case, very useful. Jon Sprouse used similar methods to provide further evidence
for the grammatical “reality” of islands.
At any rate, these methods are easy enough to apply and deciding how
much variation there actually is among the principles described in the GB modules
is important for PoS reasons.
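For those who like the logic spelled out, here is a minimal sketch in Python of the differences-in-differences reasoning behind such factorial rating designs. The numbers are invented for illustration (they are not Kush's or Sprouse's actual data, and the 2x2 layout is the general style of design rather than anyone's exact materials): the "island effect" is the super-additive drop for a long dependency into an island, over and above the independent costs of dependency length and island structure, and the cross-language comparison is over these relative scores rather than absolute ratings.

# Hypothetical illustration of a 2x2 factorial acceptability design
# (dependency length x structure), with invented mean z-scored ratings.

ratings = {
    "English": {("short", "non-island"): 0.9, ("long", "non-island"): 0.6,
                ("short", "island"): 0.5, ("long", "island"): -0.3},
    "Swedish": {("short", "non-island"): 1.2, ("long", "non-island"): 0.9,
                ("short", "island"): 0.8, ("long", "island"): 0.0},
}

def island_effect(cells):
    # Differences-in-differences: the extra penalty for a long dependency
    # into an island, beyond the sum of the length and structure costs.
    length_cost = cells[("short", "non-island")] - cells[("long", "non-island")]
    structure_cost = cells[("short", "non-island")] - cells[("short", "island")]
    total_drop = cells[("short", "non-island")] - cells[("long", "island")]
    return total_drop - (length_cost + structure_cost)

for language, cells in ratings.items():
    print(language, round(island_effect(cells), 2))

With these made-up numbers, Swedish ratings are uniformly higher than English ones, yet the two languages show the same relative island effect, which is the shape of result the text describes.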
3. Substantive Universals
The GB achievements noted above concern formal universals.
From where I sit, we have had relatively little to say about substantive
universals. Recently, due to the work of
Cinque and colleagues, it has been proposed that there is a small(ish)
inventory of potential functional heads available for grammatical use and from
which grammars can select. More interesting still is the proposal that the
order of these heads is invariant. Thus, the relative hierarchical position of
these functional heads is constant cross-linguistically, e.g. T is always above v and below C. Of course, were it only C, T, and v, then this might not be
that interesting. However, Cinque has considerably fattened up this basic
inventory and has provided reasons for thinking that the invariance extends to
the position of adverbs, modals, aspects and more. In effect, this is a slightly more
sophisticated version of the old universal base hypothesis. And if true, it is
very interesting. So, two questions: is it true? People are working on this
already. And, theoretically more interesting, why is it true, if it is? Thus,
for example, why exactly must the
hierarchy be C over T over v? There are
theories that treat the semantics as effectively conjunctive. Thus, a sentence
is effectively one long conjunction of properties. If this is so, why must the conjuncts embed with Cish information higher than Tish information, and Tish higher than vish? Or, why must theta domains be inside case domains, which are in turn inside C-info domains?
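To see why the ordering is not obviously forced by the meaning, consider a schematic conjunctive representation of a simple sentence (the notation here is purely illustrative, not anyone's official analysis): ∃e [declarative(e) & past(e) & ride(e) & Agent(the doctor, e) & Patient(the horse, e)]. Conjunction is commutative, so nothing in this formula distinguishes an order among the conjuncts; the invariant C over T over v syntax has to be explained by something other than the conjunctive semantics itself.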
It is, perhaps, worth further observing that this program is
in tension with MP. How? Well, it effectively enriches FL/UG with a bunch of linguistically very specific information. That's part of what makes it so interesting.
4. Lexical Atoms and Constructions
Scratch a thoroughly modern generativist and s/he will rail
against constructions. What GB showed is that these should be treated epiphenomenally, as the interaction of simpler operations and conditions. However, constructions are still alive and well in lexical
semantics. So for example, we still tend to treat subcategorization as a reflex
of semantic selection, the former syntactically projecting “information” coded
in the latter, e.g. give projects
three DPish arguments, believe one DP
external argument and a propositional internal argument. This effectively reflects the view that Lexical Items are "small" sentences. It also reflects a "Fregean" conception of lexical meaning, which divides the world semantically into n-ary predicates/concepts and arguments/objects that saturate them.
Until recently, this has been the only game in town.
However, lately neo-Davidsonians have offered another “picture”: lexical items,
even predicates, are semantically very simple. All they do is denote
properties. Thus, syntactic structure is not a projection of lexical
information, but a construction from basically unstructured 1-place
predicates. What makes an object an
object interpretatively on this view is not that it saturates the internal y
variable, but that it has been merged with a predicate of events and has thereby been type-lifted into an event modifier.
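For concreteness, here is one schematic way the contrast is often drawn (the particular entries are illustrative, not a claim about anyone's specific proposal). On the Fregean picture, the verb itself carries the argument structure: ride is something like λy λx [ride(x,y)], and a horse saturates the y variable. On the neo-Davidsonian picture, the verb is just a property of events, λe [ride(e)]; merging the object does not saturate a slot in the verb's entry but adds a conjunct through a thematic predicate, giving roughly λe [ride(e) & Patient(a horse, e)]. Objecthood, on the second picture, is a fact about the combinatorics, not about the lexical item.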
The availability of these two different pictures of what
combination amounts to semantically has raised the question of what a lexical
item really is. Or more specifically: given that the properties of a linguistic
expression are the joint contribution of the contents of the items plus the
grammatical combinatorics, how much ought we attribute to each? IMO, a very
interesting abstract question that is ripe for theoretical investigation. There
is already empirical work that bears on these issues by Higginbotham, Kratzer,
Pietroski, Schein, (A) Williams a.o. However, the general implications of these
discussions have not been foregrounded as much as they deserve to be.
There is another reason to pursue this question.
Syntacticians have generally assumed that LIs are very simple and that the
action comes from the combinatorics.
However, it does not take that much tinkering with the contents of LIs to
get them to allow in through the lexical back door what the grammar prohibits. Alex C and Thomas G have walked us through
this for some feature-passing technology. However, Paul Pietroski (over lunch) noted that the long-distance relations that lambdas can code, coupled with rich enough lexical content for the meaning of a terminal, can allow one to generate perfectly fine representations (using just simple merge/lambda
conversion) in which, say, The doctor
rode a horse from Texas could mean that the doctor was from Texas (not a
possible reading of this sentence).[5]
For our syntactic explanations to work, then, we need to make sure that what they prohibit doesn't sneak back in through the lexical back door, and this means gaining a better understanding of what our lexical primitives look like.
These are some of the questions and problems I’d like to see
tackled. I could give others, but, hopefully, this will grease the wheels.
Again, please feel free to add your 2 cents.
It’s a discussion that is both worth having and fun to have, or so I
hope. Looking forward to your input.
[1]
Here, I believe, there is still a big question of what to do about Principle B
effects. I’ve written on this elsewhere but the proposals don’t even meet my
low standards satisfactorily.
[2]
I put “argument/adjunct” in scare quotes for it is pretty clear that this is
not exactly the right cut, though it serves current purposes.
[3]
Kush notes that there are residual issues to be solved, but recall this
discussion is taking place over beer.
[4]
I mention this one as some UMD grad students are working on this now (Dustin
Chacon and Mike Fetters). It seems that the variation may not be quite as
reported.
[5]
Say the lexical content of ride is roughly λy λx λz [Agent(x,e) & ride(e) & Patient(y,e) & from(x,z)]. We don't allow these right now. Why not? What blocks these?
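Just to make the worry vivid, here is a schematic run-through of the unwanted reading (the composition steps are illustrative): applying this entry to a horse saturates y, applying the result to the doctor saturates x, and letting Texas saturate z yields [Agent(the doctor, e) & ride(e) & Patient(a horse, e) & from(the doctor, Texas)], i.e. a reading on which the doctor, not the horse, is from Texas. Nothing in merge or lambda conversion as such blocks this entry; only a theory of possible lexical contents does.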
Here is a monster that's prowling outside the gates: how are the uses of socially conditioned variants of grammatically relevant things such as pronoun forms learned, in spite of the dim prospects for finite parameterization of the conditioning factors? For example, the endlessly subtle variations in the use of polite vs familiar pronoun forms in European languages, and different kinds of complications found in other places, such as systems of kinship depending on the three-way relationships between speaker, addressee and person referred to ('kunderbi'; http://wiki.pacific-credo.fr/images/0/04/A_Garde_Ph.D_Thesis_ch4-kunderbi.pdf, and various other sources).
It would be terrific to have a discussion on the extended version of the Berwick/Hauser/Tattersall paper that had been mentioned a little while ago. Evaluating the minimalist solution of the language evolution problem should certainly be of interest to any BIOlinguist.

I have this intuition that a lot of the current stuckness of syntax is due in part to the fact that we're beholden to particular frameworks which unintentionally sneak in a lot of assumptions. We use these as tools for thinking about the phenomena, but they're really not generic tools at all, and I think that prevents us from really addressing the issues. There's a lot of stuff at the bottom of these frameworks -- the primitive, low-level stuff -- which is assumed, but which also forces us to think about problems in a very particular way, and I suspect that if we could pull back to a more general theory, we'd make some more progress. So I'd like to think that what's next is a de-framework-ification, but who knows.