I’ve been
invited at the last minute to a conference in May in Athens (yes I am gloating)
on the future of Generative Syntax. The organizers “reached out” to me (no
doubt after several more worthy people turned them down) and offered me a deal
that I could not refuse. So I took it. What follows is how the organizers
(Artemis Alexiadou, Marcel den Dikken, Winfried Lechner, Terje Lohndal and
Peter Svenonius) describe the shindig. What they don’t include here is a
personal promise from Terje of “good Greek food,” something that I personally
plan to hold him to. So here’s the manifest and some stuff that I sent them in
answer to the questions at the end.
***
Description:
Generative Syntax in the Twenty-First Century: The
Road Ahead will be a 3-day round-table taking stock of generative syntax and
discussing the future of the field. It will take place in Athens, Greece, and
feature discussions and a poster session.
Goals:
We want to incite a high-level
discussion of foundational issues with a group of practical size and with a
reasonable number of shared background assumptions in hopes of producing a
concrete result in the space of three days. Ideally we are aiming for a white
paper which will reaffirm the theoretical core of the discipline; that is,
outline major assumptions and concepts that we believe are shared by most
transformational generative syntacticians today. We think this may be helpful
for the field in addressing the three challenges mentioned below.
We also want to identify major
outstanding research questions. We want to attempt to identify the major
burning questions concerning syntax and its interfaces. This is not in order to
determine the research agenda of individual researchers. Rather, we believe
that it is part and parcel of taking stock to also think about what lies ahead.
In addition to plenary and group
discussions, there will be a poster session, in which in particular young and
early-career-stage researchers will be encouraged to participate. We very much
want to hear what they are working on, and in addition we think they will make
valuable contributions to the plenary discussions.
Rationale:
Generative syntax has made important
contributions to our understanding of language, and with it, the human mind.
The field continues to be fecund and vibrant and new discoveries and
developments continue apace. However, the rapid growth and development of this
still-young field leaves it without a clear and uncontroversial canon,
especially in syntax. In principle, there is nothing wrong with this. However,
it raises a few challenges, three of which we will briefly outline here.
A major challenge concerns the
coherence of the field. The large number of different analytic
approaches has resulted in small groups working on x, y, or z. From a
scientific point of view, this is not problematic, but it raises difficulties
when it comes to interaction, funding, recruitment and external visibility. We
want to discuss ways of improving this situation. We believe that this is
especially important given that linguistics and generative syntax are not major
fields compared to e.g., psychology or physics. In addition to being
problematic in its own right, the proliferation of approaches further
exacerbates the problem of teaching and supervision.
Another challenge is related to
teaching and supervision. During the time when Government and Binding (GB) was
pursued, Liliane Haegeman’s and Andrew Radford’s widely used textbooks were
sources that quickly enabled students to read original research papers. Given
the proliferation of different assumptions within the Minimalist Program (MP),
the situation is different today. Different textbooks build on different
assumptions, and they differ significantly when it comes to how much they
explain the transition from GB to the MP. This in turn makes it increasingly
difficult for students to make the jump from reading textbooks to the original
research literature. Our impression is that this was easier two decades ago and
we would like to discuss whether it is possible to fix this.
A third challenge is related to
publications. Because minimalist syntacticians generally cannot rely on a
shared core of hypotheses and principles, each paper has to build its case from
the ground up. This has already resulted in extremely long papers, much longer
than in most other sciences. It is not clear that this is benefitting the
field.
Background:
The project originated from a
discussion concerning ways in which a conference could be organized in Greece
in order to signal support for the linguistics community in the southern
Balkans, a group of people who have severe difficulties attending conferences
and partaking in discussions due to the current economic situation; money for
research-related activities is all but gone and salary cuts have been severe.
So a guiding idea was to bring a conference to Greece. Along with the potential
benefits the event might have on a local level, another motive in the
background was the ongoing pursuit of strategies for getting EU-level research
funding for collaborative projects with Greek linguists.
***
The organizers also included some questions for the
participants to address. Here they are:
1. Strengths and Weaknesses
A. What have been the main strengths of generative syntactic research, with particular emphasis on the early 21st century, and what do you think is wrong with the field of generative syntax today?
B. How do you think the field could/should go about addressing the current problems?
2. Central unresolved theoretical issues
A. What are the major open questions in the field of generative syntax today?
B. What is or ought (not) to be in the field’s theoretical core?
3. Syntax in relation to other fields of linguistic inquiry
A. What are the main success stories and bottlenecks in the interaction between syntax and other core-theoretical sub-disciplines (semantics, phonology, morphology)?
B. What are the main success stories and bottlenecks in the interaction between syntax and the experimental sub-disciplines (language acquisition, sentence processing and neurolinguistics), and how can syntax be more useful to those?
4. The road ahead
A. What do you see as the biggest challenges for generative syntactic research in the coming years/decades?
B. In which direction(s) would you like to see the field proceed, and where would you like the field to be in ten or twenty years’ time?
These questions are clearly on the same wavelength as
those solicited by Hubert Haider and Henk van Riemsdijk for their Hilbert Project.
They are excellent questions. I am publishing the questions here to solicit
some suggestions from FoL readers. I am told that crowdsourcing and the wisdom
of crowds are all the rage. To help matters along I finish this with some
answers that I sent along to the organizers as per instruction.
***
1. Strengths and Weaknesses:
Intellectually,
contemporary Generative Grammar (GG) is unbelievably healthy. Due to the work
in the mid to late 20th century, GG has an impressive body of
doctrine and mid-level laws that are empirically well grounded. We identify
these “laws” as “effects,” e.g. binding effects, island effects, control
effects, S/WCO effects, etc. This is an enviable accomplishment and one that
should not ever be forgotten or diminished. Don’t misunderstand: these “laws”
are not perfect, but they are very, very impressive.
In
addition, there continues to be excellent typological work refining these laws
and extending their reach. The intensive
cross-linguistic study of various languages and their Grammars (G) that began
in earnest in the mid-1980s continues even more strongly today. I doubt that there has ever been as rich and
varied a group of grammatical descriptions as exists today. This is really subtle and excellent work, and
it has immensely enriched our understanding of the variety of ways that G
structure gets realized.
So,
as far as the descriptive/typological enterprise goes (the one that explores
the variety of Gs), things are better than ever. However, I believe that GG has
decided, consciously or not, to narrow its vision and as a result linguistic theory has gone into abeyance. What do I
mean by this?
It
helps to start by asking what the subject matter of linguistics is. There are
two related enterprises: descriptions of native speakers’ particular Gs and
descriptions of the human capacity to acquire Gs. The latter aims, in effect, to
describe FL. There have been three strategies for pursuing this inquiry in GG
history:
1. Inferring properties of FL from Gs up
2. Inferring properties of FL via Plato’s Problem (PP)
3. Inferring properties of FL via Darwin’s Problem (DP)
Of
these the only currently widely pursued route to FL is via 1, the
typological-comparative strategy. Both 2 and 3 are currently, IMO, very weak.
Indeed, PP considerations have largely disappeared from the research agenda of
syntacticians (we have offloaded some of this to students of real time
acquisition, but this is a slightly different enterprise) and DP is at best
boilerplate.
Moreover,
I don’t believe that using route-1 is sufficient to get a decent account of FL,
as there is an inherent limitation to scaling up from Gs to FL. The problem is
akin to confusing Chomsky and Greenberg universals. A design feature of FL need
not leave overt footprints in every G (e.g. island effects will be absent in Gs
without movement) so the idea that one can determine the basic properties of FL
by taking the intersection of features present in every G likely is a failing
strategy.
This
does not mean that route-1 is unimportant for investigating FL (i.e. UG). But
it does imply that it won’t focus on questions and properties of FL that
strategies 2 and 3 will. Thus it is, at best, IMO, incomplete.
Let
me illustrate what I mean. As Chomsky has rightly emphasized, there is always a
tension between descriptive and explanatory adequacy. The wider the range of
descriptive tools at our disposal, the easier it is to make distinctions among
various phenomena and the easier it is to cover divergent data points. The secondary place of theory is evident in
the reluctance to ever dispose of a G mechanism. Let me give you an example.
Early Minimalism distinguished between interpretable and uninterpretable features.
For various conceptual reasons it was argued that it would be better to
substitute valued and unvalued for interpretable and uninterpretable (I never
found these arguments that convincing, but that is not the relevant point
here). Ok, so in place of +/-I we get +/-valued. As it happened, the
substitution of valued for interpretable had a very short half-life, and now
many (most?) GGers take Gs to exploit BOTH notions. So in place of a two-way
distinction, we now have a four-way distinction. Not surprisingly, having two
extra ways of cutting up the pie allows for greater empirical suppleness. Have
the theoretical concerns of complicating things this way been addressed? Not to
my knowledge, despite this theoretical inflation’s ramifications for both PP and
DP.
This
is but one example. I think that this inflationary two-step is very common.
A suggests principle P. B suggests replacing P with P’. C comes along and
argues that both P and P’ are required. Rinse and repeat. There is no doubt
that such conceptual inflation can have payoffs. However, there is nary a word
about the theoretical costs of doing this. Or, to put this more pointedly: the
empirical costs of remaining conceptually modest are always tallied while the
explanatory costs of inflating the basic theory are rarely considered. This
tilts inquiry away from theoretical work.
There
are other examples. My favorite hobbyhorse regards the huge overlap between
AGREE as a Probe Goal relation and I-merge. But I will save this for the next
question.
Let
me end with a general point, a practical suggestion and an exhortation.
Nobody
denies the importance of category-1 work like that above. However, categories 2
and 3 are also important. We should insist that PP and DP questions get asked
for all proposals. When an analysis is delivered we should ask how the relevant
grammatical operations could be acquired. We should ask how the operations bear
on DP. We should not treat G descriptions as ends in themselves but as way
stations to the deeper question of how FL is organized. And even if we cannot
deliver answers, we should insist that these concerns not be shoved aside and
forgotten. In the best of all possible worlds, we should even be ready to live
with some recalcitrant data points rather than expand the theoretical apparatus
in ways that are unattractive from a PP or DP perspective.
The
suggestion: I think we need to retrain aspiring GGers to understand PP and DP.
If my own experience is any indication, many GGers have problems deploying a
PoS argument. Thus, the practical consequences of PP and DP reasoning for the
practicing syntactician are far from clear. I suggest making this part of
any discussion. We may decide to put it aside, but foregrounding these concerns
may have the beneficial effect of advancing respect for explanatory adequacy.
It should also provide theoretical grounds for rejecting formal inflation. This
would in itself be a very good thing.
The
exhortation: I began by noting the tremendous intellectual accomplishments of
GG. To repeat: GG has an impressive set of results. Moreover, these results
should serve as starting points for furthering GG’s investigation of the fine
structure of the Faculty of Language (FL). We now have results we can build on
theoretically. So, in the best sense, there has never been a better time for
doing good theoretical work that addresses PP and DP concerns, the problems
that got many of us (e.g. me) into GG to begin with.
2.
Central unresolved theoretical issues
There
are a whole bunch. But in the minimalist setting, three immediate ones stand
out in my mind.
First,
what to do about islands and the “antecedent government” parts of the ECP? Island theory (aka Subjacency and Barriers)
used to be the jewel in the crown of linguistic theory. Within minimalism,
islands are theoretically poorly understood.[1] The deficiency is fourfold:
1. Phases,
as currently understood, do not comfortably accommodate islands. Many have
noted this, but it doesn’t make it less true. There is a straightforward way of
translating island effects via
subjacency theory into phase terms (and the translation is not any worse than
the earlier subjacency account) but it is not better either. This means that we
have gained no insight into the structure of islands that removes their linguistic
particularity (thereby making them DP problems). Here are some questions: are
Ds phase heads? If so, how do they differ from v and C? Why can’t they have
edges that can be used for escaping islands? Why are C, v and D phase heads?
These are all well-known questions to which we have offered no very
enlightening answers.
2. Though
it is possible to translate subjacency effects into phase terms, it is much
harder to do the same for ECP effects. First there is the problem of what ECP
effects are effects of. The ECP was a trace licensing principle. Traces within
GB were considered in need of “help” due to their lexical anemia. But, there
are no traces in minimalist theory, only copies/occurrences. Why then are some
copies/occurrences invidiously distinguished (e.g. why are arguments treated
more leniently than adjuncts)?
Moreover, there is some evidence (mainly from Lasnik and his students)
that whereas argument island effects can be obviated via ellipsis, this seems
less true for adjuncts. Why? What does it mean to say that “LF” well-formedness
conditions apply to adjuncts but not arguments? Moreover, why are the domains
relevant to the ECP so like those relevant for islands? It seems like these
should really be treated in the very same way, but it appears that they are
not. In other words, ECP effects seem like they should reduce to subjacency
effects but they appear not to. What’s up?
3. Why
are island effects restricted to movement dependencies? Why doesn’t binding
obey islands? Are these really PF effects and if so in what way are they such?
Do islands only apply to movement chains that terminate in a phonetically null
deleted copy and if so why? These are
all really the same question. We have had some insights into these questions
from the ellipsis literature, but there is lots that we still don’t really
understand. But what would be really nice to know is what exactly it is about a
chain’s terminating in a gap that matters to islandhood.
4. As
Luigi Rizzi has pointed out, there appears to be a kind of redundancy between
phases and minimality. Do we require both notions? Can we reduce one
restriction to the other? How much is long movement via intermediate positions
feature-driven? And if it is, can we use these features to explain movement
limitations via something like relativized minimality?
Second,
what to do about AGREE and I-merge. I mentioned this before, but it strikes me
that the Probe-goal technology and AGREE-based Gs that also include I-merge are
massively redundant. How so? Well the “distances” spanned by movement
(A-movement in particular) are often the same as those that show up in
agreement configurations. This is obvious for the many cases of agreement where
the surface configuration is spec-head. But it is even true for the cases of
inverse agreement. It is seldom the case that the span of an inverse agreement
relation differs from that found in overt movement. This is why, I believe,
movement is often analyzed as AGREE+EPP.
I
find this very problematic. It seems to me to go against the great (late)
minimalist idea that movement and structure building are flip sides of the same
coin. It undermines the idea that movement is an expected design feature of Gs.
Why? Because, if movement is parasitic on AGREE then there is little reason to
expect to see it. There already exists a way of establishing non-local
dependencies between expressions in a phrase marker, namely AGREE, and it can
apply independently of I-merge, so why does I-merge ever apply? Our current answer is
the EPP. In other words, it applies because
it does apply. True, but not enlightening. Let me put this another way: Early
minimalism took movement to be a design flaw. Later minimalism took it to be an
inevitable by-product of the simplest version of Merge. Current theory takes it
to be a design flaw of the perfect theory? Hmm.
What’s
the alternative? Well, that there is no long distance AGREE operation. All
non-local dependencies are mediated via I-merge. This is the “old” minimalist
idea that feature checking is in Spec-head configurations. Agree was thus a
rather local head-to-head relation that takes place within a restricted domain.
This idea was, IMO, hastily abandoned at the expense of the minimalist conceit
that aimed to reduce the theoretical machinery so as to make it more DP
tractable. However, given a copy theory of movement and a single cycle theory
it is possible to mimic long distance agree with movement and deletion of the
higher copy. Given that this is probably required anyhow for other cases, why
introduce a novel operation, AGREE, and a novel dependency (probe-goal) as
primitive features of Gs?
Responding
to these kinds of Darwin-Problem arguments (and eliminating AGREE as a
primitive operation) has serious theoretical consequences, which may indicate
that this is not a tenable move. For example, here are two:
1. We will need headedness in the syntax (not just at the CI interface for
interpretation, as some currently assume (e.g. Chomsky)) to make it work. Thus
labeling will be a syntactic operation as it is required to allow heads to
locally converse. This is not currently the rage, at least not if we go by
Chomsky’s latest “Problems of Projection” papers on the issue (which, I should
confess, I am not moved by at all, but that is a topic for another discussion).
To be clear: moving in this direction might complicate the minimalist conceit
of treating Merge as the magic sauce that launched FL. If labeling is a basic
syntactic operation, then Merge alone does not suffice. I believe that there
are ways of navigating these theoretical shoals, but adding labeling is not theoretically
innocuous.
2. Anaphoric dependencies are not products of feature agreement. I think that so
understanding anaphora is a bad idea on minimalist grounds. Nonetheless, if
AGREE does not exist as a grammatical operation then treating binding and
anaphora in terms of AGREE is simply a non-starter. As many of you know, this
is perhaps the most popular way of analyzing anaphoric dependence.
In
sum, I think that allowing AGREE as an operation in addition to I-merge has
theoretical costs that we have not carefully considered, and we should.
There
are many more hot questions (e.g. should we expect a minimalist FL to be
modular? What exactly is a syntactic feature and why do features exist? What role,
if any, does morphology play in the syntax: does it drive operations or is
it a relatively useless by-product?), but I am happy to submit the above two
as my prime candidates for current consideration.
3. Syntax and…
There is lots of low-hanging fruit
in the combination of syntax with work in psycho-linguistics. There is even an
interesting high-level question relating them: how transparent is the relation
between Gs and the systems that put these Gs to use in real-time processes? If
the relation is reasonably transparent, then this work provides another window into the basic primitives of
FL. There has been very interesting work done on this question. Syntacticians
can do quite a bit to help it along. I know this from personal experience given
what we do at UMD. For those interested in this, I have reviewed a bunch of this
stuff on Faculty of Language, so look there. Colin Phillips, Jeff Lidz, and Paul
Pietroski have done lots of interesting work showing how to combine good formal
linguistics with interesting psycho-linguistics. I even think that there are
interesting argument forms out there that may be of interest to syntacticians,
concerning, for example, the right notions of locality or the right format for
the semantics of determiners.
What
can syntacticians do to help this along? Well, most importantly, learn how to talk
to psycho-types. To do most of the work they are doing it doesn’t really matter
which vintage syntactic theory they exploit. If you want to study the
processing or acquisition of binding, for example, GB binding theory is almost
always sufficient. Newfangled grammatical devices are not always the
most helpful. So, don’t feed your psycho-friends the latest minimalist wisdom
when a good, easy-to-use GB analysis is all that they really need.
Second,
be ready to think of things from the psycho point of view. Here’s what I mean.
IMO, results in syntax are far better grounded than almost any result in
psycho. As a result, syntacticians reasonably find it implausible that results
from psycho might have consequences for syntax. However, taking transparency
seriously lends itself to the unsettling conclusion that results using psycho
techniques might provide arguments for rearranging our syntactic theories. We
should be ready to consider this a real option and even help develop such
arguments. IMO, they are not quite ready for prime time, but they are getting
very interesting and, in some areas (e.g. on determiner meaning) have provided
strong reasons for preferring some representations over others.
4.
I’ve
answered 4 in various ways above. So I won’t repeat myself.
So
that’s it. Fun in Athens in May. Can’t wait.
[1]
Most curious to my mind is that successive cyclicity, which was in GB very
tightly tied to islands, is now largely divorced from such concerns.