I again mis-spelled someone's name. I "called" Winnie, "Wini." It is now corrected and I am sorry. However, the misspelling allows me to once again thank Winnie for all his great work in getting the Athens gig going. Thx and sorry.
******
I am currently sitting on the 6th floor of the Athen’s Way Hotel, eating some fruit and yogurt and sipping a cup of tea. It’s a beautiful day. The Road Ahead Conference (see here) ended yesterday and I thought that I would jot down some quick comments. Before doing so, let me take this opportunity to thank the organizers (Winnie Lechner, Marcel den Dikken, Terje Londahl, Artemis Alexiadou and Peter Svenonius) for a wonderful event. It’s been a blast and my most secure conclusion from the past three days is that if you ever get invited anywhere to do anything by any of these people GO! Thx, and I am sure that here I am not speaking just for myself but for all participants. That pleasant task out of the way, here are a few impressions. In this post I’ll talk about very general matters. In some follow up posts I’ll remark on some of the socio-political issues raised and what we might do to address them. And in some yet later posts I’ll discuss some of the stimulating questions raised and ideas mooted. So let’s start.
My overall impression, one that runs quite counter to some
of the pessimism I often hear from colleagues about the state of contemporary
syntax, is that intellectually speaking,
we are in a golden age of syntax (though politically and sociologically, not so
much (I return to this in later posts)).
What do I mean? Well, the organizers invited a very talented group of
people (present company excluded, of course) doing a pretty wide cross section
of syntactic work. What is very clear is that there is a huge amount of
excellent work being done on a wide variety of languages on a wide variety of
topics. Typology and morpho-syntax are particularly hot areas of current
interest, but syntacticians (in the wide sense meaning those with a good
knowledge of syntactic theory and methods) are also heavily involved in
“conjunctive” areas such as syntax + language acquisition, syntax + language
impairment/disorders, syntax + processing, a.o. As a result, we now know more
about the details and overall architectures of more grammars and have better
models of how this grammatical knowledge arises and is put to use in more areas
than ever before. Don’t get me wrong: it
is clear to all that there is a tremendous amount that we still do not know,
even about fundamental issues, but to someone like me, who has been doing this
stuff (or at least listening to people who do this stuff) for the last 40+
years, it is clear how much we have learned, and it is very very impressive.
Furthermore, it is also clear that we live in a time where all kinds of syntactic work can be
fruitfully pursued. What do I mean by “all”?
Well, there are roughly three kinds of syntactic investigations: (i)
there is work on the structure of particular Gs, (ii) there is work on the
structure of UGs based on work on particular Gs and (iii) there is work on the
structure of FL/UG based on the particulars of UG. (i) aims at careful
description of the Gs of a given language (e.g. how does agreement work in L,
how are RCs formed? How does binding work?). (ii) aims to find the features
that delimit the range of possible Gs in part by distilling out the common
features of particular Gs. (iii) aims to simplify UG in part by unifying the
principles discovered by (ii) and in part by relating/reducing/unifying these
features with more general features of cognition and computation. All
three kinds of work are important and valuable. And though I want to plead
specially for (iii) towards the end, I want to be very very very clear that
this is NOT because I disvalue (i) or
(ii) or that I think that (iii)-like work is inherently better than the others.
I don’t and never did. My main take-away from Athens is that there has never
been a better time to do all three kinds of work. I will return to a discussion
of (iii) for I believe the possibility of doing it fruitfully is somewhat of a
novelty and the field has not entirely understood how to accommodate it. More
on this below. First, the other two.
Let’s start with (i). If your heart gravitates towards
descriptive endeavors, there are many many entirely unexplored languages out
there waiting for your skills, and there are now well developed methods and
paradigms ready for you to apply (and modify). Indeed, one of the obvious
advances GG has made has been to provide a remarkably subtle and powerful set
of descriptive techniques (based of course on plenty of earlier theory) for
uncovering linguistically (i.e. grammatically) significant phenomena.
Typologists who fail to avail themselves of this technology are simply going to
mis-describe the individual languages
they investigate,[1]
let alone say anything about the more general issues relating to the
variety of structures that natural languages display.[2]
Similarly if your interests are typological you are also in
luck. Though there are many more language families to investigate, many have
been looked at in great detail and Generative linguists have made a very good
start at limning the structural generalizations that cut across them. We now
have mapped out more properties of the grammar of case and agreement (at both
the specific and general levels) of more languages than ever before. We have
even begun to articulate solid mid level generalizations that cut across wide
swaths of these languages (and even some language families) enabling a more
sophisticated exploration of the parametric options that UG makes available. To
me, someone interested in these concerns but not an active participant in the
process, the theoretical speculations seemed extremely exciting (especially
those linking parameter theory to language change and language acquisition).
And though the trading relation between micro vs macro variation has not yet
been entirely sorted out (and this impression may be an optimistic appraisal of
the discussions I (over)heard) it is pretty clear that there is a thoughtful
research agenda about how to proceed to attack these big theoretical issues.
So, Generative Syntax (including here morpho-syntax) in this domain is doing
very very well.
Concerning (ii): A very nice outcome of the Athens
get-together was the wide consensus regarding what Amy Rose Deal dubbed
“Mid-Level Generalizations” (MLG). I have frequently referred to these as the
findings of GB, but I think that MLGs is a better moniker for it recognizes
that these generalizations are not the exclusive property of any specific
research tradition. So though I have tried to indicate that I consider the
differences between GB and LFG and HPSG and TAG and… to have been more notational
than notional, I adopt ARD’s suggestion that it is better to adopt a more
neutral naming convention so as to be more inclusive (does this count as PC?).
So from now on, MLGs it is! At any rate,
there is a pretty broad consensus among syntacticians about what these MLGs are
(see here for a partial enumeration, in, ahem, GB terms). And this consensus
(based on the discovery of these MLGs) has made possible a third kind of
syntactic investigation, what I would dub pure
theoretical syntax (PTS).[3]
Now, I don’t want to raise hackles with this term, I just need a way of
distinguishing this kind of work from other things we call “theory.” I do not
mean to praise this kind of “theory” and demean the other. I really want to
just make room for the enterprise mentioned in (iii) by identifying it.
So what is PTS? It is directed at unifying the MLGs. The
great example of this in the GB tradition is Chomsky’s unification of Ross’s
island effects in “On Wh Movement” (OWM).[4]
Chomsky’s project was made feasible by Ross’s discoveries of the MLGs we (after
him) call “islands.” Chomsky showed how to treat these apparently disparate
configurations as instances of the same underlying system and in the process
removed the notion of construction from the fundamental inventory of UG. This
theoretical achievement in unification led to the discovery of successive
cyclicity effects, to the discovery that adjuncts were different from arguments
(both in how they move and how porous they are) and to the discovery of novel
locality effects (the ECP and CED in particular).
Someone at the conference (I cannot recall who) mentioned
that Chomsky’s work here was apparently un-motivated by empirical concerns.
There is a sense in which I believe this to be correct, and one in which it is
not. It is incorrect in that Ross’s islands, which were the target of Chomsky’s
efforts at unification, are MLGs and so based on a rich set of language
specific data (e.g. *Who did you meet someone who likes) which hold in a
variety of languages. However, in another sense the claim is correct. In particular,
Chomsky did not aim to address novel particular data beyond Ross’s islands. In
other words, the achievement in OWM was the unification itself. Chomsky did not
further argue that this unification got us new data in, say, Hungarian.
Of course, others went on to show that, whatever Chomsky intended, the
unification had empirical legs. Indeed, the whole successive cyclicity industry
starting with Kayne and Pollock on stylistic inversion and proceeding through
McCloskey’s work on Irish, Chung’s on Chamorro, Torrego’s on Spanish and many
many others was based on this unification. However, Chomsky’s work was
theoretical in that its main concern was to provide a theory of Ross’s MLGs and
little (sic!) more.
Indeed, one can go a little further here. Chomsky’s
unification had an odd consequence in the context of Ross’s work. It proposed a
“new” island that Ross himself extensively argued against the existence of. I
am talking, of course, about Wh-islands, extraction from which Ross found to be highly
acceptable, especially if the clausal complement was non-finite. Chomsky’s
theory had to include these in its inventory despite Ross’s quite reasonable empirical
demurrals (voiced regularly ever since) because they followed from the unification.[5]
So, it is arguable (or was at the time) that Chomsky’s unification was less empirically adequate than Ross’s
description of islands. Nor was this the only empirical stumble that the
unification arguably suffered. It also predicted (at least in its basic form)
that who did you read a book about is
ungrammatical, which flies in the face of its evident acceptability. We now know that many of these stumbles led
to interesting further research (Rizzi on parameters for example), but it is
worth noting that the paper, now a deserved classic, did not emerge without
evident empirical problems.
Why is it worth noting this? Because it demonstrates a
typical feature of theoretical work: it makes novel connections, leading to new
kinds of research/data but often fails to initially (or even ever) cover all of the previously “relevant” data.
In other words, unification can incur an apparent empirical cost. Its virtue
lies in the conceptual ties it makes, putting things together that look very
different and this is a virtue even if some data might be initially (or even
permanently) lost. And this historical lesson has a moral: theoretical work,
work that in hindsight we consider invaluable, can start out its life
empirically hobbled. And we need to understand this if we are to allow it to
thrive. We need slightly different measures for evaluating such work. In
particular, I suggest that we look to this work more for its ‘aha’ effects
than for whether it covers the data points that we had presupposed, until its
emergence, to be relevant.
Let me make this point a slightly different way. Work like
(iii) aims to understand how something is possible, not how it actually is. Now,
of course to be actual it is useful to be possible. However, many possible
things are not actual. That said, it is often the case that we don’t really see
how to unify things that look very different, and this is especially true when
MLGs are being considered (e.g. case theory really doesn’t look much like
binding theory and control has very different properties from raising).
And here is where theory comes in: it aims to develop ways of seeing how two
things that look very different might
be the same; how they might possibly
be connected. I want to further observe that this is not always easy to
do. However, by the nature of the
enterprise, the possible often only coarsely fits the actual, at least until
empirical tailoring has had a chance to apply. This is why some empirical
indulgence is condign.
So you may be wondering why I got off on this jag in a post
about the wonders of the Athens’ conference. The reason is that one of the
things that makes this period of linguistics such a golden age is that the large
budget of MLGs the field seems to recognize makes it ripe for the theoretically
ambitious to play their unificational games. And for this work to survive (or even see the light of day) will
require some indulgence on the part of my more empirically conscious
colleagues. In particular, I believe that theoretical work will need to be evaluated
differently (at least in the short and medium run) from the two other kinds of
work that I alluded to above, where empirical coverage is reasonably seen as
the primary evaluative hurdle.
More specifically, we all want our language particular
descriptions and MLGs to be empirically tight (though even in fashioning MLGs some
indulgence (i.e. tolerance for “exceptions”) is often advisable), elegance be
damned. But we want our theories to be simple (i.e. elegant and conceptually
integrated and natural), and it is important to recognize that this is a virtue
even in the face of empirical leakage. Given that we have entered a period
where good theory is possible and desirable, we need to be mindful of this or
risk crushing theoretical initiative altogether.[6]
As you may have guessed, part of why I wrote the last
section is that the one thing I felt was missing at Athens was the
realization that this kind of indulgence is now more urgent than ever. Yours
truly tried to argue the virtues of making Plato’s Problem (PP) and Darwin’s
Problem (DP) central to contemporary research. The reaction, as I saw it (and I
might be wrong here), was that such thinking did not really get one very far,
that it is possible to do all theoretical work without worrying about these
inconclusive problems. To the degree that PP was acknowledged to be important,
it struck me that the consensus was that we should off-load acquisition
concerns to the professionals in psych (though, of
course, they should consult us closely in the process). The general tone seemed
to be that eyeballing a proposal for PP compatibility was just self-indulgence,
if not worse. The case of DP fared worse still. It was taken as too underspecified to
even worry about and anyhow general methodological concerns for simplicity and
explanation should prove robust enough without having to indulge in pointless
evolutionary speculation of the cursory variety available to us.
I actually agree with some version of each of these points.
Linguists cannot explore the fine details of PP without indulging in some
psychology requiring methods not typically part of the syntactician’s technical
armamentarium.[7]
And, I agree that right now what we know about evolution is very unlikely to
play a substantive role in our theorizing. However, PP and DP serve to vividly
bring before the mind two important points: (i) that the object of study in
GG is FL and its fine structure and (ii) that theoretical unification is a
virtue in pursuing (i). PP and DP serve to highlight these two features, and as
these are often, IMO, lost sight of, this is an excellent thing.
Moreover, at least with PP, it is not correct that we cannot
eyeball a proposal to get a sense of whether it will pass platonic muster. In
fact, in my experience many of the thoughtful professionals often get their
cues regarding hard/interesting problems by first going through the simple PoS
logic implicit in a standard syntactic analysis.[8]
This simple PP analysis lays the groundwork for more refined considerations.
IMO, one of the problems with current syntactic pedagogy is that we don’t teach
our students how to deploy simple PoS reasoning. Why? Well, I think it’s
because we don’t actually consider PP that important. Why? Well the most
generous answer is that it is assumed that all the work we do already tacitly endorses
PP’s basic ideals and so worrying PP to death will not get us very much bang
for the buck. Maybe, but being explicit really does make a difference.
Explicitly knowing what the big issues are really is useful. It
helps you to think of your work at one remove from the all-important details
that consume you. And it can even serve, at times, to spur interesting kinds of
more specific syntactic speculation. Lastly, it’s the route towards engaging
with the larger intellectual community that the Athens conference indicated so
many feel detached from.
Personally, I think that the same holds true with DP. I
agree that we are not about to use the existing (non-existing) insights from
the evolution of cognition and language to ground type (iii) thinking. But, I
think that considering the Generative project in DP terms enlarges how we think
about the problem. In fact, much more specifically, it was only with the rise
of the Minimalist Program (MP), in which DP was highlighted and stressed as PP
had been before, that unifying the MLGs rose to become a
central kind of research project. If you do not go “beyond” explanatory
adequacy there is no pressing reason for worrying about how our UGs fit in with
other domains of cognition or general features of computation (if indeed they
do). PP shoves the learnability problem in our faces and doing so has led us to
think constructively about Chomsky Universals and how they might be teased out
of our study of Gs. Hopefully, DP will do the same for cleaning up FL/UG: it will,
if thought about regularly, make it vivid to us that unifying the modules and
seeing how these unified systems might relate to other domains of cognition and
computation is something that we should try to tease out of our understanding
of our versions of UG.
I should add that doing this should encourage us to start
forcefully re-engaging with the neuro-cognitive-computational sciences
(something that syntacticians used to do regularly but not so much anymore).
And if we do not do this, IMO, linguistics in general and syntax in particular
will not have a bright future. As I said in Athens, if you want to know about
the half life of a philological style of linguistics, just consider that the
phrase “prospering classics department” is close to being an oxymoron. That way
lies extinction. So we need to reengage with these “folks” (ah my first
Obamaism) and both PP and DP can act as constant reminders of the links that
syntax ought to have with these efforts.
Ok, enough. To conclude: Intellectually, we are in a golden
age of linguistics (though we may need to manage this a bit so as to not
discourage PTS). However, it also appears that politically things are not so
hot. Many of the attendees felt that GG work is under threat of extinction in
many places. There was animated discussion about what could be done about this
and how to better advertise our accomplishments to both the lay and scientific
public. We discussed some of this here, and similar concerns were raised in
Athens. However, it is clear that in some places matters are really pretty
horrid. This is particularly unfortunate given the intellectual vigor of
generative syntax. I will try and say something about this feature in a
following post.
[1]
Jason Merchant made this point forcefully and amusingly in Athens.
[2]
I suspect that some of the hostility that traditional typology shows towards GG
lies in the inchoate appreciation that GG has significantly raised the empirical standards for typological
research and that those not conversant with these methods are failing empirically.
In other words, traditional typologists rightly fear that they, not their subject matter, are threatened
with obsolescence. To my mind, this is all very positive, but, for obvious
reasons, it is politically terrible. Typologists of a certain stripe might be
literally fighting for their lives, and this naturally leads to a very hostile
attitude towards GG based work.
[3]
Yes, I also see the possibility of a PTSD syndrome (pure theoretical syntax
disorder).
[4]
Continuing a project begun in “Conditions on Transformations” and ending with Barriers.
[5]
Though see Sprouse’s work, which provides evidence that wh-islands show the same
super-additivity profile as other islands.
[6]
I will likely blog on this again in the near future if all of this sounds kind
of cryptic.
[7]
Though this is changing rapidly, at least at places like the University of
Maryland.
[8]
I’d like to thank Jeff Lidz for showing me this in spades. I sat in on his
terrific class, the kind of class that every syntactician (especially newly
minted syntacticians) should take.
Your point about islands is rather interesting, I think.
Looking in from the outside, it strikes me as characteristic of the generative tradition - in contradistinction to most other branches of linguistics - that sufficiently "nice" hypotheses get accepted even when there is obvious prima facie evidence against them, on the assumption that the counterexamples will eventually be explained away. This is not entirely unjustified. If a regular sound correspondence works well for most of the basic vocabulary, we don't reject it because of a few exceptions; rather, we postulate that the exceptions are to be explained as loans, or as examples of an as yet undetected different correspondence, or some such thing - and we check whether such an explanation is feasible given other data.
However, it's also true that one has to be honest about those exceptions up front; until they have been successfully accounted for, it's impossible to justify presenting the hypothesis as a fact. And, unfortunately, a lot of linguists(-to-be) first encounter generative linguistics in the form of nice simple elegant accounts which ignore or gloss over obvious empirical inadequacies (that would certainly describe the MA syntax course I got). Such a presentation can hardly fail to tempt any student with a prior knowledge of linguistic variation to dismiss the whole generative enterprise out of hand. In this respect, a little humility from those engaged in iii) could go a long way towards engaging those involved in i).
Thx for the point. I could not agree more. We should be quite up front about apparent counter-examples for several reasons. First, because being explicit means that we can be clear about what needs doing next. Second, because often working on the exception to a generalization opens up a whole new field of work (think how ellipsis obviates certain kinds of island effects). Third, if citing a nice MLG and then noting some of the problems became acceptable procedure, then we would not have to write 50-page papers the last 35 pages of which consist of twists and turns to satisfy reviewers about some data point. Many papers become worse and bury the main idea by not allowing the author to simply acknowledge the problem and move on. So, I could not agree with you more.
Where we might part company regards the "humility." In my experience the counter-examples are all too well known and acknowledged outside the intro courses (and in intro courses in every field the story is always cleaned up). One of the nicest features of the Athens gig was how undefensive people were. No problem noting problems. Where the outside world (i.e. non-GG world) goes astray, I think, is in not recognizing that a pretty good MLG is a very valuable object. Thermodynamics is full of these. In retrospect so were Newton's laws (not perfect, it turns out). So too theories like the germ theory of disease, which basically says that germs sometimes cause disease. These were very valuable ideas despite not being perfect. The difference in linguistics is that people seem to expect the relevance of counter-examples to be obvious (and even what counts as a counter-example is taken as obvious). This is seldom the case if one is testing Chomsky Universals. Why? Because these are not surface visible; they are inferred. So I think that one problem is that even good people confuse Chomsky Universals for Greenberg Universals, and the latter ARE evaluable by simply looking at the surface pattern. Thus a "counter-example" is serious, as it is a description of a surface pattern.
At any rate, thx for the observation and, if Athens is any indication, current GGers are perfectly happy to adopt your reasonable suggestion.
Hi Norbert
Is there supposed to be a link to some MLGs in paragraph 6, just above reference 3 where it says "At any rate, there is a pretty broad consensus among syntacticians about what these MLGs are (see here for a partial enumeration, in, ahem GB terms)" ?
If there is, I'd appreciate the link!
Thanks :)
Here's the link:
http://facultyoflanguage.blogspot.com/2015/03/a-shortish-whig-history-of-gg-part-3.html