Saturday, April 11, 2015

Athens in May

I’ve been invited at the last minute to a conference in May in Athens (yes, I am gloating) on the future of Generative Syntax. The organizers “reached out” to me (no doubt after several more worthy people turned them down) and offered me a deal that I could not refuse. So I took it. What follows is how the organizers (Artemis Alexiadou, Marcel den Dikken, Winfried Lechner, Terje Lohndal and Peter Svenonius) describe the shindig. What they don’t include here is a personal promise from Terje of “good Greek food,” something that I personally plan to hold him to. So here’s the manifesto and some stuff that I sent them in answer to the questions at the end.



Generative Syntax in the Twenty-First Century: The Road Ahead will be a 3-day round-table taking stock of generative syntax and discussing the future of the field. It will take place in Athens, Greece, and feature discussions and a poster session.

We want to incite a high-level discussion of foundational issues with a group practical in size and with a reasonable number of shared background assumptions in hopes of producing a concrete result in the space of three days. Ideally we are aiming for a white paper which will reaffirm the theoretical core of the discipline; that is, outline major assumptions and concepts that we believe are shared by most transformational generative syntacticians today. We think this may be helpful for the field in addressing the three challenges mentioned below.
We also want to identify major outstanding research questions. We want to attempt to identify the major burning questions concerning syntax and its interfaces. This is not in order to determine the research agenda of individual researchers. Rather, we believe that it is part and parcel of taking stock to also think about what lies ahead.
In addition to plenary and group discussions, there will be a poster session, in which young and early-career-stage researchers in particular will be encouraged to participate. We very much want to hear what they are working on, and in addition we think they will make valuable contributions to the plenary discussions.

Generative syntax has made important contributions to our understanding of language, and with it, the human mind. The field continues to be fecund and vibrant and new discoveries and developments continue apace. However, the rapid growth and development of this still-young field leaves it without a clear and uncontroversial canon, especially in syntax. In principle, there is nothing wrong with this. However, it raises a few challenges, three of which we will briefly outline here.
A major challenge concerns the coherence of the field. The large number of different analytic approaches has resulted in small groups working on x, y, or z. From a scientific point of view, this is not problematic, but it raises difficulties when it comes to interaction, funding, recruitment and external visibility. We want to discuss ways of improving this situation. We believe that this is especially important given that linguistics and generative syntax are not major fields compared to, e.g., psychology or physics. In addition to being problematic in its own right, the proliferation of approaches further exacerbates the problem of teaching and supervision.
Another challenge is related to teaching and supervision. During the time when Government and Binding (GB) was pursued, Liliane Haegeman’s and Andrew Radford’s widely used textbooks were sources that quickly enabled students to read original research papers. Given the proliferation of different assumptions within the Minimalist Program (MP), the situation is different today. Different textbooks build on different assumptions, and they differ significantly when it comes to how much they explain the transition from GB to the MP. This in turn makes it increasingly difficult for students to make the jump from reading textbooks to the original research literature. Our impression is that this was easier two decades ago and we would like to discuss if it is possible to fix this.
A third challenge is related to publications. Because minimalist syntacticians generally cannot rely on a shared core of hypotheses and principles, each paper has to build its case from the ground up. This has already resulted in extremely long papers, much longer than in most other sciences. It is not clear that this is benefitting the field.

The project originated from a discussion concerning ways in which a conference could be organized in Greece in order to signal support for the linguistics community in the southern Balkans, a group of people who have severe difficulties attending conferences and partaking in discussions due to the current economic situation; money for research-related activities is all but gone and salary cuts have been severe. So a guiding idea was to bring a conference to Greece. Along with the potential benefits the event might have on a local level, another motive in the background was the ongoing pursuit of strategies for getting EU-level research funding for collaborative projects with Greek linguists.
The organizers also included some questions for the participants to address. Here they are:

1.     Strengths and Weaknesses

A.     What have been the main strengths of generative syntactic research, with particular emphasis on the early 21st century, and what do you think is wrong with the field of generative syntax today?
B.     How do you think the field could/should go about addressing the current problems?

2.     Central unresolved theoretical issues

A.     What are the major open questions in the field of generative syntax today?
B.     What is or ought (not) to be in the field’s theoretical core?

3.     Syntax in relation to other fields of linguistic inquiry

A.     What are the main success stories and bottlenecks in the interaction between syntax and other core-theoretical sub-disciplines (semantics, phonology, morphology)?
B.     What are the main success stories and bottlenecks in the interaction between syntax and the experimental sub-disciplines (language acquisition, sentence processing and neurolinguistics), and how can syntax be more useful to those?

4.     The road ahead

A.     What do you see as the biggest challenges for generative syntactic research in the coming years/decades?
B.     In which direction(s) would you like to see the field proceed, and where would you like the field to be in ten or twenty years’ time?
These questions are clearly on the same wavelength as those solicited by Hubert Haider and Henk van Riemsdijk for their Hilbert Project. They are excellent questions. I am publishing the questions here to solicit some suggestions from FoL readers. I am told that crowdsourcing and the wisdom of crowds is all the rage. To help matters along, I finish this with some answers that I sent along to the organizers as per instructions.
1. Strengths and Weaknesses:

Intellectually, contemporary Generative Grammar (GG) is unbelievably healthy. Due to the work in the mid-to-late 20th century, GG has an impressive body of doctrine and mid-level laws that are empirically well grounded. We identify these “laws” as “effects,” e.g. binding effects, island effects, control effects, S/WCO effects, etc. This is an enviable accomplishment and one that should not ever be forgotten or diminished. Don’t misunderstand: these “laws” are not perfect, but they are very, very impressive.

In addition, there continues to be excellent typological work refining these laws and extending their reach.  The intensive cross-linguistic study of various languages and their Grammars (G) that began in earnest in the mid 1980s continues even stronger today.  I doubt that there has ever been as rich and varied a group of grammatical descriptions as exists today.  This is really subtle and excellent work and it has immensely enriched our understanding about the variety of ways that G structure gets realized.

So, as far as the descriptive/typological enterprise goes (the one that explores the variety of Gs), things are better than ever. However, I believe that GG has decided, consciously or not, to narrow its vision and as a result linguistic theory has gone into abeyance. What do I mean by this?

It helps to start by asking what the subject matter of linguistics is. There are two related enterprises: descriptions of native speakers’ particular Gs and descriptions of the human capacity to acquire Gs. The latter aims, in effect, to describe FL. There have been three strategies for pursuing this inquiry in GG history:

1.     Inferring properties of FL from Gs up
2.     Inferring properties of FL via Plato’s Problem (PP)
3.     Inferring properties of FL via Darwin’s Problem (DP)

Of these, the only currently widely pursued route to FL is 1, the typological-comparative strategy. Both 2 and 3 are currently, IMO, very weak. Indeed, PP considerations have largely disappeared from the research agenda of syntacticians (we have offloaded some of this to students of real-time acquisition, but this is a slightly different enterprise) and DP is at best boilerplate.

Moreover, I don’t believe that using route-1 is sufficient to get a decent account of FL, as there is an inherent limitation to scaling up from Gs to FL. The problem is akin to confusing Chomsky and Greenberg universals. A design feature of FL need not leave overt footprints in every G (e.g. island effects will be absent in Gs without movement), so the idea that one can determine the basic properties of FL by taking the intersection of features present in every G is likely a failing strategy.

This does not mean that route-1 is unimportant for investigating FL (i.e. UG). But it does imply that it won’t focus on the questions and properties of FL that strategies 2 and 3 will. Thus it is, at best, IMO, incomplete.

Let me illustrate what I mean. As Chomsky has rightly emphasized, there is always a tension between descriptive and explanatory adequacy. The wider the range of descriptive tools at our disposal, the easier it is to make distinctions among various phenomena and the easier it is to cover divergent data points. The secondary place of theory is evident in the reluctance to ever dispose of a G mechanism. Let me give you an example. Early Minimalism distinguished between interpretable and uninterpretable features. For various conceptual reasons it was argued that it would be better to substitute valued and unvalued for interpretable and uninterpretable (I never found these arguments that convincing, but that is not the relevant point here). OK, so in place of +/-I we get +/-valued. As it happened, the substitution of valued for interpretable had a very short half-life, and now many (most?) GGers take Gs to exploit BOTH notions. So in place of a two-way distinction, we now have a four-way distinction. Not surprisingly, having two extra ways of cutting up the pie allows for greater empirical suppleness. Have the theoretical costs of complicating things this way been addressed? Not to my knowledge, despite this theoretical inflation’s ramifications both for PP and DP.

This is but one example. I think that this inflationary two-step is very common. A suggests principle P. B suggests replacing P with P’. C comes along and argues that both P and P’ are required. Rinse and repeat. There is no doubt that such conceptual inflation can have payoffs. However, there is nary a word about the theoretical costs of doing this. Or, to put this more pointedly: the empirical costs of remaining conceptually modest are always tallied, while the explanatory costs of inflating the basic theory are rarely considered. This tilts inquiry away from theoretical work.

There are other examples. My favorite hobbyhorse regards the huge overlap between AGREE as a Probe Goal relation and I-merge. But I will save this for the next question.

Let me end with a general point, a practical suggestion and an exhortation.

Nobody denies the importance of category-1 work like that above. However, categories 2 and 3 are also important. We should insist that PP and DP questions get asked of all proposals. When an analysis is delivered, we should ask how the relevant grammatical operations could be acquired. We should ask how the operations bear on DP. We should not treat G descriptions as ends in themselves but as way stations to the deeper question of how FL is organized. And even if we cannot deliver answers, we should insist that these concerns not be shoved aside and forgotten. In the best of all possible worlds, we should even be ready to live with some recalcitrant data points rather than expand the theoretical apparatus in PP- or DP-unattractive ways.

The suggestion: I think we need to retrain aspiring GGers to understand PP and DP. If my own experience is any indication, many GGers have problems deploying a PoS argument. Thus, the practical consequences of PP and DP reasoning for the practicing syntactician are far from clear. I suggest making this part of any discussion. We may decide to put it aside, but foregrounding these concerns may have the beneficial effect of advancing respect for explanatory adequacy. It should also provide theoretical grounds for rejecting formal inflation. This would in itself be a very good thing.

The exhortation: I began by noting the tremendous intellectual accomplishments of GG. To repeat: GG has an impressive set of results. Moreover, these results should serve as starting points for furthering GG’s investigation of the fine structure of the Faculty of Language (FL). We now have results we can build on theoretically. So, in the best sense, there has never been a better time for doing good theoretical work that addresses PP and DP concerns, the problems that got many of us (e.g. me) into GG to begin with.

2. Central unresolved theoretical issues

There are a whole bunch. But in the minimalist setting, three immediate ones stand out in my mind.

First, what to do about islands and the “antecedent government” parts of the ECP? Island theory (aka Subjacency and Barriers) used to be the jewel in the crown of linguistic theory. Within minimalism, islands are theoretically poorly understood.[1] The deficiency is fourfold:

1.     Phases, as currently understood, do not comfortably accommodate islands. Many have noted this, but that doesn’t make it less true. There is a straightforward way of translating island effects via subjacency theory into phase terms (and the translation is not any worse than the earlier subjacency account), but it is not better either. This means that we have gained no insight into the structure of islands that removes their linguistic particularity (thereby making them DP problems). Here are some questions: are Ds phase heads? If so, how do they differ from v and C? Why can’t they have edges that can be used for escaping islands? Why are C, v and D phase heads? These are all well-known questions to which we have offered no very enlightening answers.
2.     Though it is possible to translate subjacency effects into phase terms, it is much harder to do the same for ECP effects. First there is the problem of what ECP effects are effects of. The ECP was a trace-licensing principle. Traces within GB were considered in need of “help” due to their lexical anemia. But there are no traces in minimalist theory, only copies/occurrences. Why then are some copies/occurrences invidiously distinguished (e.g. why are arguments treated more leniently than adjuncts)? Moreover, there is some evidence (mainly from Lasnik and his students) that whereas argument island effects can be obviated via ellipsis, this seems less true for adjuncts. Why? What does it mean to say that “LF” well-formedness conditions apply to adjuncts but not arguments? Moreover, why are the domains relevant to the ECP so like those relevant for islands? It seems like these should really be treated in the very same way, but it appears that they are not. In other words, ECP effects seem like they should reduce to subjacency effects, but they appear not to. What’s up?
3.     Why are island effects restricted to movement dependencies? Why doesn’t binding obey islands? Are these really PF effects and if so in what way are they such? Do islands only apply to movement chains that terminate in a phonetically null deleted copy and if so why?  These are all really the same question. We have had some insights into these questions from the ellipsis literature, but there is lots that we still don’t really understand. But what would be really nice to know is what exactly it is about a chain’s terminating in a gap that matters to islandhood.
4.     As Luigi Rizzi has pointed out, there appears to be a kind of redundancy between phases and minimality. Do we require both notions? Can we reduce one restriction to the other? How much is long movement via intermediate positions feature driven? And if it is, can we use these features to explain movement limitations via something like relativized minimality?

Second, what to do about AGREE and I-merge? I mentioned this before, but it strikes me that probe-goal technology and AGREE-based Gs that also include I-merge are massively redundant. How so? Well, the “distances” spanned by movement (A-movement in particular) are often the same as those that show up in agreement configurations. This is obvious for the many cases of agreement where the surface configuration is spec-head. But it is even true for cases of inverse agreement. It is seldom the case that the span of an inverse agreement relation differs from that found in overt movement. This is why, I believe, movement is often analyzed as AGREE+EPP.

I find this very problematic. It seems to me to go against the great (late) minimalist idea that movement and structure building are flip sides of the same coin. It undermines the idea that movement is an expected design feature of Gs. Why? Because if movement is parasitic on AGREE, then there is little reason to expect to see it. There already exists a way of establishing non-local dependencies between expressions in a phrase marker, AGREE, and it can apply independently of I-merge, so why does I-merge ever apply? Our current answer is the EPP. In other words, it applies because it does apply. True, but not enlightening. Let me put this another way: early minimalism took movement to be a design flaw. Later minimalism took it to be an inevitable by-product of the simplest version of Merge. Current theory takes it to be a design flaw of the perfect theory? Hmm.

What’s the alternative? Well, that there is no long-distance AGREE operation. All non-local dependencies are mediated via I-merge. This is the “old” minimalist idea that feature checking takes place in spec-head configurations. Agree was thus a rather local head-to-head relation that takes place within a restricted domain. This idea was, IMO, hastily abandoned, at the expense of the minimalist conceit of reducing the theoretical machinery so as to make it more DP-tractable. However, given a copy theory of movement and a single-cycle theory, it is possible to mimic long-distance agree with movement and deletion of the higher copy. Given that this is probably required anyhow for other cases, why introduce a novel operation, AGREE, and a novel dependency (probe-goal) as primitive features of Gs?

Responding to these kinds of Darwin’s Problem arguments (and eliminating AGREE as a primitive operation) has serious theoretical consequences, which may indicate that this is not a tenable move. Here are two examples:
1.     We will need headedness in the syntax (not just at the CI interface for interpretation, as some, e.g. Chomsky, currently assume) to make it work. Thus labeling will be a syntactic operation, as it is required to allow heads to locally converse. This is not currently the rage, at least not if we go by Chomsky’s latest “Problems of Projection” papers on the issue (which, I should confess, I am not moved by at all, but that is a topic for another discussion). To be clear: moving in this direction might complicate the minimalist conceit of treating Merge as the magic sauce that launched FL. If labeling is a basic syntactic operation, then Merge alone does not suffice. I believe that there are ways of navigating these theoretical shoals, but adding labeling is not theoretically innocuous.
2.     Anaphoric dependencies are not products of feature agreement. I think that so understanding anaphora is a bad idea on minimalist grounds. Nonetheless, if AGREE does not exist as a grammatical operation then treating binding and anaphora in terms of AGREE is simply a non-starter. As many of you know, this is perhaps the most popular way of analyzing anaphoric dependence.

In sum, I think that allowing AGREE as an operation in addition to I-merge has theoretical costs that we have not carefully considered, and we should.

There are many more hot questions: should we expect a minimalist FL to be modular? What exactly is a syntactic feature, and why do features exist? What role, if any, does morphology play in the syntax (does it drive operations, or is it a relatively useless by-product)? But I am happy to submit the above two as my prime candidates for current consideration.

3.     Syntax and...

There is lots of low-hanging fruit in the combination of syntax with work in psycholinguistics. There is even an interesting high-level question relating them: how transparent is the relation between Gs and the systems that put these Gs to use in real-time processes? If the relation is transparent, then it provides another window into the basic primitives of FL. There has been very interesting work done on this question. Syntacticians can do quite a bit to help it along. I know this from personal experience given what we do at UMD. For those interested in this, I have reviewed a bunch of this stuff on Faculty of Language, so look there. Colin Phillips, Jeff Lidz and Paul Pietroski have done lots of interesting work showing how to combine good formal linguistics with interesting psycholinguistics. I even think that there are interesting argument forms out there that may be of interest to syntacticians concerning, for example, the right notions of locality or the right format for the semantics of determiners.

What can syntacticians do to help this along? Well, most importantly, learn how to talk to psycho-types. For most of the work they are doing, it doesn’t really matter which vintage of syntactic theory they exploit. If you want to study the processing or acquisition of binding, for example, GB binding theory is almost always sufficient. The newest-fangled grammatical devices are not always the most helpful. So don’t feed your psycho-friends the latest minimalist wisdom when a good, easy-to-use GB analysis is all that they really need.

Second, be ready to think of things from the psycho point of view. Here’s what I mean. IMO, results in syntax are far better grounded than almost any result in psycho. As a result, syntacticians reasonably find it implausible that results from psycho might have consequences for syntax. However, taking transparency seriously lends itself to the unsettling conclusion that results using psycho techniques might provide arguments for rearranging our syntactic theories. We should be ready to consider this a real option and even help develop such arguments. IMO, they are not quite ready for prime time, but they are getting very interesting and, in some areas (e.g. on determiner meaning) have provided strong reasons for preferring some representations over others.

4.     I’ve answered 4 in various ways above. So I won’t repeat myself.

So that’s it. Fun in Athens in May. Can’t wait.

[1] Most curious to my mind is that successive cyclicity, which in GB was very tightly tied to islands, is now largely divorced from such concerns.


  1. Kalo taksidhi. Envy Envy Envy ....

  2. This comment has been removed by the author.

    1. This actually looks worth flying from Bangkok for.

  3. With due respect to all of the fine individuals who are going, it's hard not to be struck by the incongruity between the title of the conference and the range of featured voices. Among the long list of invited speakers there's only one person who is both < 50 and graduated in the 21st century. I'm sure everybody will have sage things to offer about the future, but the distribution does not exude confidence in the future. Would a similar meeting about syntax held in the 70s or 80s have had a similar distribution?

  4. Colin: In, say, the late 70s, you wouldn't have had many people doing GG who had their doctorates 15 years ago :) Most ppl on the list are slightly above 50, though, and many received their doctorates in the 90s or later, so it ain't that bad :) But yeah, the really younger people seem to all be presenting posters :(

  5. Colin: Thanks for this observation. Our thought was that we could attract interested younger people with our call for posters. We considered selecting the entire program through a call for papers but decided that it would be too much like other events, e.g. GLOW. Our list of invitees was arrived at by consensus among the organizers after a long back-and-forth during which a very long list of suggestions was pared down to one of a manageable length. The process has the usual shortcomings and obviously many excellent people couldn't be included. In general, it was easier for us to agree on people with a clear track record and long experience than on those without, given the kind of broad perspective we were hoping to achieve for the general discussion.

    1. Hey, if you'd like more young people, the fine & <30 linguists collecting interviews for the linguistics podcast 'the polemical brain' would love to come and cover this event for a professional & public English-speaking audience. Unfortunately, we have no access to funding to come over in this capacity. Perhaps someone might suggest a source of funding?

  6. Utpal: if the 90s feels recent (... as it does to me), then you know that you're long in the tooth. But beyond being facetious, I think that conveying a sense of recent progress is important to the health of a field. Norbert raises many interesting questions in his post, but is there anything in there that couldn't have been asked 10 years ago? (Would the same be true in semantics? I don't know.) In days of yore, one of the things that attracted young talent to the field was the sense that they could make big contributions, and they could make them soon. In order to continue to attract people, it helps if they have a similar impression. And that's made easier if we can point to breakthroughs in the past 10 years, and that's even easier to recognize if those discoveries were made by people who weren't in the field 10 years ago. In some fields there are barriers to this, because of the huge lab infrastructure that it takes years to build. But linguistics has benefited from a culture that believes that one can make a big impact very young.

    1. I agree that getting younger practitioners would have been nice, but as I am just a (late) invitee I leave the decision procedure to the organizers (I see Peter made a comment above).

      Let me address your second point: is there anything here that couldn't have been asked 10 years ago? Well, as a matter of fact I think there is. Take the questions regarding binding and AGREE. We now have several different analyses of binding for example that are quite different from the GB binding theory. The question of the comparative minimalist utility of these wrt Darwin's Problem could not have been asked a decade ago as these analyses were not well developed then. So too with my obsession about headedness. One important current view, due in part to Chomsky, is that labels are phrasal titivations added at Spell Out for the benefit of interface interpretation. If a consequence of dumping AGREE is that this needs revision then this is also not a question that could have been addressed before as the before account had very different views of labels and their role in the syntax and semantic interface.

      The question that could have been asked more than a decade ago has to do with islands. I know you have a fondness for island phenomena so I am not surprised this one may have caught your attention. However, the question has, IMO, been ignored, and the direction of phase theory has not been to embrace it. This is especially true if one extends one's thinking to antecedent government effects. So could it have been asked earlier? Yes. Have we made any progress? Well, in part. The whole ellipsis industry has shed interesting light on how to conceive of islands. A more updated version of my worry might be what to do with ECP effects if indeed islands are effectively PF phenomena. Again, this question could not have been posed before the Merchant/Lasnik work gained acceptance. This is about a decade old, but it has been a hotbed of activity and its consequences are now being carefully mulled over.

      I should add that one of my desires is to see OLD questions, ones that were once more central, revived. Moreover, I think that the "young" may have forgotten these questions, so a dose of the "old" may be useful here. I should also add that, in my admittedly limited experience, the kinds of questions the organizers are asking to be addressed have dropped out of a lot of contemporary syntactic discussion. I know that it has been incredibly difficult to get any linguists to discuss the kinds of questions we have been asked to address (see the wheezing Hilbert Questions effort). So the organizers are to be commended for trying to get such a discussion rolling. Hopefully, the old farts will be put in their place by the poster-giving young Turks.

  7. Colin: I think that at least in theoretical syntax, it is hard to evaluate something accurately as a breakthrough before it has been around for a while, and people have had a chance to challenge it from different angles, or to explore its consequences for the other parts of the theory that it impacts.

  8. Peter: you raise a couple of interesting points, which makes me wonder whether things have changed over the years. You say that it takes a while before one can tell whether a breakthrough has occurred. But my impression is that this wasn’t the case in the past, where there was rapid recognition of important and impactful results. We can easily think of cases where there were important results that were taken up by the community almost right away. Also, you imply that to find the “broad perspective” that you want for the meeting you needed to gravitate towards older folk, and Norbert worries that “the young may have forgotten these questions”, in talking about some important and long-standing questions. Assuming that you’re both right, this raises concerns about the health of the field.

    To be clear: I have no beef with any of the fine people who will be gathering in Athens, and I think that we can all hope to see live(ly) blogging on FoL from the meeting. Norbert will offer tutorials, I presume. The concern is about signs of the vitality of a field, which can also help to attract talent, drive new positions, bring sustained funding, interest from outside, etc. In the spirit of Norbert’s solicitation, I’ll offer 3 suggestions for the "road ahead".

    1. Recognizing breakthroughs. There needs to be a good sense of big, burning questions, and of what it might look like to make progress on them. Peter and Norbert might be right that it takes years to tell, or that it’s hard to get people to engage with those types of questions. If so, then we have a problem. If people don’t know the big questions, or don’t recognize breakthroughs until many years later, then that’s a pretty good incentive to work on something else. You can do better at getting a job and getting tenure by going after smaller problems that yield respected publications on a manageable time scale.

    2. Beyond a cottage industry. Back in the day, when the field was younger and smaller, it was not uncommon for people to make a big splash early in their career with a relatively small-scale piece of work, often unpublished. It was like the Jobs and Wozniak era, where a couple of kids could take the world by storm with something they built in the garage. But while the tech industry has moved on, syntax is still largely a cottage/garage industry. I’m sure there’s still a lot of room for that, but perhaps breakthroughs on some of the bigger questions [see (1)] will require larger-scale projects involving concerted collaborations. For example, Norbert points out the limited progress on questions involving Plato’s Problem (PP), and I agree with him. I suspect that making real progress on that front will require collaborations among people with quite different expertise (different language groups, different methods, different types of linguistic analysis, etc.). In fields like neurobiology and physics it is taken for granted that some things have got to be done by teams, but syntax is mostly not configured that way. For instance, there are some well-regarded graduate programs that still explicitly discourage students and faculty from collaborating.

    3. Immigration reform. Semantics and phonology have more porous border control than does syntax. In contemporary semantics and phonology people are pursuing old chestnuts and new questions using established and new-fangled approaches, and that work is still considered to be semantics or phonology. For the most part, this is serving them well. Syntax has been less welcoming. If you become too wayward, then you risk losing your passport. A more enlightened immigration policy might help with (1) or (2).

    1. Colin: You’re making lots of very good points. You should come to the conference! A few reactions:

      You note that in the past “there were important results that were taken up by the community almost right away” and wonder whether “the Jobs and Wozniak era” of linguistics is over. I would say it’s relative. Yes, the field is different now: it has an impressive canon of results, as Norbert says; the theory underpinning it is more mature; and it is bigger, with more specialization (requiring more collaboration for far-reaching results). All of this means that the chances of a bright young turk’s new idea turning everything on its head overnight are reduced (and it’s not clear that that is bad news).

      On the other hand, the field is still small and young compared to others, and the possibilities for a young person to make a significant impact are still vastly greater in our field than in those others. We all know countless recent cases where a PhD dissertation or a single-authored paper written by a graduate student is widely cited and influential. I believe that the situation is at least quantitatively different in other fields.

      On a different note, and just for the record, I don’t think I share your sense that syntax is more doctrinaire than phonology or semantics. Or maybe I just don’t realize that my passport has been revoked!

    2. Thanks Peter. Re: “immigration reform”. My concern was less about being “doctrinaire”, and more about how the borders of the field are defined. People have a relatively well-defined notion of what syntactic research looks like, and as one moves further from that prototype there is (inevitably) a point where they will say, “no, that no longer counts as doing syntax”. My hunch is that one reaches that boundary sooner in syntax than in modern phonology or semantics (though casual discussions today make me less certain). We’ll happily define syntax as (roughly) the study of that part of the mental language faculty that deals with sentence structure, and in studying that mental capacity there are lots of things that one can ask, including the foundational concerns that Norbert lays out above (Plato’s Problem, etc.). But in practice, the work stays fairly close to the established prototype. If you stray from the standard Marr level, or take Plato’s Problem a little too seriously, or spend a lot of time using different tools, then you’re no longer doing syntax, you’ve moved into another field. I’m loath to use a personal example, but in my own case, I feel like I’ve been interested in similar questions about the mental capacity for syntax for a long time, and for my first job I was even hired as a syntactician. But I think it’s fairly clear that I lost my syntactic passport many years ago (and I also stopped self-classifying as such). My hunch is that in phonology one can stray a little further from the prototype without risking citizenship, and in semantics there’s more of a mix of views on border control.

      In the case of Plato’s Problem — I focus on this one because it’s one that has broad acceptance as a problem that syntacticians are accountable to — I think that the main progress in recent years has involved learning just how hard it is. Perhaps much harder than we would have imagined 40 years ago. We now have many advantages that mean that it should be possible to approach this rather more seriously than was possible a couple of decades ago. We know much more about the richness of the endstate, we can describe and analyze the primary linguistic data much better, we can make much more confident claims about what learners can and can’t extract from the PLD. This is wonderful. But it would be optimistic to call this a crowded research area currently. And if somebody does roll up their sleeves and start to dig into all of these pieces of the problem, then it no longer looks like “doing syntax”. Meanwhile, in other departments (e.g., psychology) there’s not much of a home for this kind of work either, because if you don’t believe in the richness of the endstate, then there’s no problem to answer to. Net result: Plato’s Problem is oddly under-explored these days. That’s unfortunate, given that it’s one of the better motivations for a restrictive theory of syntax.

      [To be clear, in case there’s any misunderstanding. Nothing that I say here should imply anything negative about the work that is done in syntax, nor do I doubt the legions of interesting findings in that area. And I don’t much care which passport I carry myself. But I do worry about the degree to which the field is defined by its workflow rather than its over-arching questions.]

  9. I've decided to schlep over to Europe for this. One thing I'm wondering about is the format. There are a lot of invited speakers. That's great, of course, but I wonder how easy it will be to actually run a 'round-table' in such a way that the younger researchers have a chance to make a significant contribution.

    1. I have been organizing at least one conference a year for about twenty years, and I know that organizing a conversation with a lot of interlocutors is a challenge. We are scheduling lots of discussion time and granting commensurately less floor time to individuals than in a traditional conference, so keeping the discussions on topic will be especially demanding. The SWOT-style questions that Norbert posted are part of an attempt to manage the discussion periods. The invitees’ advance responses to those questions will give us an idea of who has a lot to say about what, and will allow us to organize the event into some thematic sessions. Hopefully this will contribute to keeping the various discussions on track, which should make it easier for everybody who has something to say to say it.

    2. Speaking as one of those youngsters in the field... While I won't be able to attend, I am very much interested in getting a glimpse of the content of this event, especially with as ambitious a goal as "reaffirming the theoretical core of the discipline". Any chance there will be some audio/video documentation of the proceedings, beyond the final white paper?

    3. I love it when people stream or post video of lectures, but in my experience the question period following them is rarely something you can follow, given the logistical problems of getting the microphones close enough to whoever's talking and aiming and focusing the camera quickly enough. Since this event is supposed to be a series of big discussions, not centered around presentations, I don't know how to solve those logistical problems, and we currently have no way of videotaping or streaming the event.

      On the other hand, at least we know that my colleague Gillian Ramchand is planning to live-blog the event, and I'm sure Norbert will have a thing or two to say as well!

  10. I wonder whether other fields spend as much time in introspection and self-worry as syntax does. I think not, and I'm not sure it's warranted or even healthy. I think the field already has an excellent "sense of big, burning questions, and of what it might look like to make progress on them". If the field is not generating brand-new big burning questions on a daily basis, but working on subquestions that we think may contribute to serious progress, that's a sign of seriousness, not a cause for worry. If you build for them, the breakthroughs will come. And they do.

    I, at least, find that I learn something new and exciting about syntax more or less every week (from students, from papers, and from talks), and can think of various breakthroughs and discoveries of recent years that could be identified as such almost immediately. The slides from my 2013 LSA plenary have some examples — I have in mind especially the list of discoveries (p.71) that in a better world would have been hailed in the popular press, as well as the newer material with which I began that talk. I could add to that list things like the Final-over-Final Constraint (FOFC), or, in a more general vein, the logic for syntactic explanation that Colin and Norbert's new Maryland colleague Omer Preminger championed in his recent MIT dissertation and LI monograph.

    It could be that we will improve the status or health of our subdiscipline by looking to other successful fields that we envy for one reason or another — and modeling ourselves on them, as Colin seems to suggest. But it could also turn out that the result would be a cargo-cult facsimile of some other field that does us no good at all — because our field is at a different stage or needs to tackle different kinds of questions. I have neither the wisdom nor a sufficiently accurate crystal ball to judge this, and I'm not sure anyone else does either. I think the best collective strategy of the field is for us all to follow our hunches and do what we think is best — and to avoid overexercising our normative instincts with respect to our colleagues. My hope is that there are — and will continue to be — enough smart syntacticians trying enough different approaches to inquiry that the breakthroughs happen, whatever research infrastructure they may require. We could use more syntacticians. But I see no slowdown in the number of brilliant, energetic young people who want to contribute to our subfield, so on this matter, at least, I'm an optimist.

  11. Thanks to the various contributors above! We will definitely bring this input along to the round-table itself. Just a couple of brief additional comments.

    Unlike David P, I am not sure that we have an _excellent_ sense of the "big, burning questions, and of what it might look like to make progress on them". First, it would be a bad sign if we came up with big, burning questions every day, as David P also says. Second, what exactly identifies something as a big, burning question? Opinions and tastes differ, but some factors come to mind: 1) it is a question whose answer goes beyond the empirical domain being investigated; the nature of government, binding, Agree, phases, and movement are big questions because they say something general about the architecture of the I-language, not just the particular set of facts one is looking at. 2) it is a question with clearly identifiable links to other areas of linguistics, and potentially to cognitive science more generally; here I am thinking of questions relating to acquisition, to processing, to semantics, to typology, etc. 3) it is a question that carries a certain amount of risk; it has to be daring enough that the answer is not easily available, but not so daring that negative results would be irrelevant or unimportant, since otherwise it won’t be funded and grad students won’t want to work on it. 4) it is a question that cannot be investigated by a single scholar; that could perhaps be done previously within generative syntax, but I would claim not really anymore, because our knowledge is now so comprehensive (which is a good thing) and the complexity such that one needs to master a wide variety of skills in order to investigate a given phenomenon adequately.

    In general, a big question is probably one where the potential for big impact is present. When writing grant applications, we have to think about the factors mentioned above all the time, and I tend to think that big questions are those that go into grants and sometimes into dissertations. That doesn't mean that all dissertations actually deal with a big issue, but most students do try to relate their study or studies to a bigger question that their dissertation makes some progress towards illuminating or solving.

    Breakthroughs are different from big questions. Breakthroughs concerning one question can come when actually trying to solve another question, be it small or big. I totally agree that we have made a lot of very important discoveries over the past 50 years and that we also have made quite a few breakthroughs.

    There's no doubt that the field has an excellent sense of good questions to be asked. My feeling, which is undoubtedly personal and a matter of taste, is that in the 1960s, 1970s, 1980s, and the early 1990s it was _clearer_ to the field what the _big_ questions were. But maybe my impression of that era is wrong (granted, I wasn't around).

    Asking and trying to solve the big and daring questions typically requires teams, not individual scholars. That's a good reason why, at least in Europe, grants usually come with a number of positions. One individual alone cannot solve these questions, and they very often require collaboration across sub-disciplines and disciplines, the way Colin described. Syntax is slowly changing in this respect, but we would benefit from doing more joint work and from framing our research questions in ways that explicitly require teamwork. Syntacticians would benefit from that, and hey, it's also much more fun!