The Technical Toil

Most MG research in the last 15 years has been concerned with the computational properties of the formalism, and that aspect will remain an important focus in the near future. Not because we're age-challenged canines and thus incapable of learning new tricks, but because there are still some important questions that need to be resolved. I think the next few years will see a lot of progress in two areas: adjunction and movement.
Adjunction

MG implementations of adjunction have been around for over a decade,1 but various people with very distinct research agendas have suddenly found themselves working on adjunction within the last two years. Greg Kobele and Jens Michaelis have been studying the effects of completely unrestricted Late Adjunction on MGs,2 and while it is clear that this increases the power of the formalism, we do not quite know yet what is going on. Meanwhile Meaghan Fowlie is trying to reconcile the optionality and iterability of prototypical adjuncts with the ordering facts that were established by Cinque in his work on syntactic cartography.3 And Tim Hunter and I have our own research projects on how the properties of adjuncts give rise to island effects.4 5
The red thread connecting all these questions is the intuition that adjuncts are not as tightly integrated as arguments, which grants them various perks such as optionality and the possibility to enter the derivation countercyclically, but also limits how they may interact with the rest of the tree and turns them into islands, among other things. Right now we're all formalizing different facets of this intuition, but once we figure out how to combine them into a cohesive whole we will have captured the essence of what makes adjuncts adjuncts and separates them from arguments with respect to Merge and Move.
Movement

Speaking of Move: It has been known for a long time that movement is the locus of power of the MG formalism; it is indispensable for generating languages that aren't context-free. Hence it is not at all surprising that lots of work has been done on it --- it is an extremely well-understood operation. And not just phrasal movement, nay, we also know quite a lot about head movement, affix hopping, and sidewards movement.
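To see why movement buys this extra power, consider the canonical non-context-free language a^n b^n c^n. A mover effectively lets a derivation carry a second string component alongside the main spine, which is exactly the trick that multiple context-free grammars use to generate such languages. Here is a minimal Python sketch of that two-component idea --- an illustration of the general mechanism, not an actual MG implementation:

```python
def counting_language(n_max):
    """Generate a^n b^n c^n for n = 0 .. n_max with a two-component
    derivation: one component (the "mover") collects the a's while
    the main spine wraps each b...c pair around what it has so far."""
    strings = []
    mover, spine = "", ""              # components: (a^n, b^n c^n)
    for _ in range(n_max + 1):
        strings.append(mover + spine)  # final step: concatenate the parts
        mover = "a" + mover
        spine = "b" + spine + "c"
    return strings

print(counting_language(3))  # ['', 'abc', 'aabbcc', 'aaabbbccc']
```

No context-free grammar generates this set, but a grammar whose derivations may carry an extra component does so effortlessly --- and that extra component is what movement provides.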
So how could there be anything left to study about movement? Well, about a year ago I felt a rather peculiar itch that was in dire need of scratching, and the scratching took the form of publishing a paper.6 In this paper I presented a system for defining new movement types without increasing the expressivity of MGs over strings. The system allows for many kinds of movement besides vanilla upward movement of a phrase to a c-commanding position, for instance head movement, affix hopping, sidewards movement, and lowering, i.e. downward movement of a phrase to a c-commanded position. What's really interesting is that every type of phrasal movement definable this way can be replaced by standard upward movement followed by downward movement. From this perspective sidewards movement, for example, is just a particular way of tying upward and downward movement together into one tidy package. Other combinations even yield movement types that allow MGs to emulate completely different formalisms such as Tree Adjoining Grammar.
This raises a long-forgotten question from the dead: do we need downward movement in syntax? The received view is that all instances of downward movement can be handled by upward movement, but now we can see that that is not true if the two types of movement are allowed to interact: no matter how much you fiddle with your features and the lexicon, if you only have Merge and upward movement then certain tree structures cannot be generated by MGs even though they can be generated by Tree Adjoining Grammars, and consequently by MGs with both upward and downward movement. So now the question about the status of downward movement is whether any of those tree structures are of linguistic interest. In the long run, this should allow us to evaluate formalisms in terms of the empirical adequacy of the movement operations that are needed to emulate them via MGs. And that is exactly what we need to do: once we have a good understanding of our own formalism, we absolutely have to get a better understanding of how alternative proposals relate to it, how they carve up language at different joints, and why.
Oh, just in case you're wondering: yes, feature coding will get some attention of course (e.g. by yours truly), but not as a topic in its own right, rather in relation to other concerns such as adjunction. And I am also willing to bet all my precious Fabergé eggs that we will see a couple of papers on MGs without the Shortest Move Constraint, a condition that keeps the power of MGs in check but is also considered too restrictive by some people (maybe even the majority of the MG community, I'm increasingly getting the impression that I'm the only one who's willing to stand up for the poor little fellow).
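For readers who haven't met the poor little fellow: the Shortest Move Constraint bans two phrases from simultaneously waiting to check the same licensee feature (say, two competing wh-phrases), which keeps the number of concurrently moving elements finitely bounded. A toy sketch of the check, with invented feature names purely for illustration:

```python
def smc_ok(movers):
    """Check the Shortest Move Constraint over the currently active
    movers, each given as its list of unchecked licensee features:
    no feature may head the feature list of two different movers."""
    pending = [feats[0] for feats in movers if feats]
    return len(pending) == len(set(pending))

print(smc_ok([["-wh"], ["-case"]]))        # True: no competition
print(smc_ok([["-wh"], ["-wh", "-top"]]))  # False: two movers compete for -wh
```

Dropping this check is what lets the number of simultaneously active movers grow without bound, which is why removing the SMC is such a delicate move formally.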
Capacities Beyond Generative Capacity

All the things above sound appreciably neato, but strictly speaking they're just variations of the same old generative capacity shtick that's been the bread and butter of computational linguistics since the fifties. Now admittedly I will never say no to a thick slice of bread with a thin layer of butter (there's no fighting my Central European upbringing), but it's nice to throw bacon and sausage into the mix once in a while. In the case of MGs, that would be learnability and parsing, and both have been frying in the pan for quite some time now.
Fellow FoL commenter Alex Clark and his collaborators are pushing the learnability boundary further and further beyond the context-free languages. An MG characterization of the learnable string languages is closer than ever --- heck, it might even be done already and just needs to be written up. If I weren't a hopelessly jaded 90s kid I would get all giddy with excitement just thinking about the possibilities that will open up for us. For the first time we can seriously study how much of a role learning may play in explaining language universals and how the workload should be distributed between UG and the acquisition algorithm. Sure, the first few hypotheses will come wrapped in seven layers of ifs and buts with some maybes sprinkled on top, but with time we will get better at interpreting the formal learnability claims from a linguistic perspective.
Should you get bored of all the learnability literature, there'll be brand new papers on parsing MGs for you to devour. The top-down parser developed by Ed Stabler has been implemented in five different programming languages and everyone can download the code, play around with it, and modify it to their liking.7 The paper hasn't been around for long and we already have a follow-up that shows why the parser correctly predicts nested dependencies to be harder to parse than crossing dependencies even though the latter are structurally more complex.8 Once again it will take a while to figure out how to interpret the behavior of the parser, and various modifications and extensions will be grafted onto it to account for oddball phenomena like merely local syntactic coherence effects9. But the all-important first step has been taken; everything is now in place, waiting for us to play around with it. And play with it we shall.
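The intuition behind that prediction can be conveyed with a drastically simplified toy measure --- emphatically not the parser's actual memory metric: a prediction made when a dependency opens must be held in memory until the dependency is discharged, and nested dependencies force the first-opened prediction to wait the longest:

```python
def max_tenure(dependencies):
    """Longest time any single prediction must be held in memory,
    measured as the span between where a dependency opens and where
    it is discharged (a toy proxy, not the published metric)."""
    return max(end - start for start, end in dependencies)

nested   = [(1, 4), (2, 3)]   # a1 a2 b2 b1: dependencies nest
crossing = [(1, 3), (2, 4)]   # a1 a2 b1 b2: dependencies cross

print(max_tenure(nested), max_tenure(crossing))  # 3 2 -> nesting is harder
```

In the nested pattern the outermost dependency spans the whole string, while in the crossing pattern no single dependency does --- the same asymmetry that the top-down parser's memory measure tracks in a far more principled way.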
Empirical Enterprises Emerge!

The biggest shift I anticipate in the MG community is the increasing focus on modeling empirical phenomena. I vividly remember a conversation I had with Greg Kobele at MOL 2009 in Bielefeld where he said that we finally had a good understanding of MGs and it was time to focus on empirical applications. Back then I disagreed, mostly because I was still wrestling with some specific formal issues that needed to be solved, e.g. the status of constraints. But once we've got adjunction and movement all figured out, using MGs as a framework for the analysis of empirical phenomena will likely turn out to be the most productive line of research.
Don't get me wrong, the formal work has never been limited to the Care Bear cotton kingdom of pure math devoid of any linguistic impact. There have always been empirical aspects to it. However, producing formal results requires a rich background in computer science, a good dose of general mathematical maturity, the ability to work at a very abstract level, and the Zen mindset that will allow you to fully accept the fact that most of your colleagues will have a hard time understanding why you're doing what you're doing. Needless to say, formal work is not exactly a mainstream activity among linguists, and as a consequence few people have tried working with MGs themselves.
But linguists are not a bunch of buffoons who cannot deal with technical machinery. It's just that their strengths lie in a different area. Rather than reasoning about language by proxy of formalisms, they're more interested in using formalisms to analyze language in the most direct way possible --- in other words, traditional empirical work. So now that we have a really good grasp of how MGs work, plus several tools such as the Stabler parser that can readily be used for empirical inquiries, there is no reason why your average linguist shouldn't take MGs for a spin. Students in particular are very curious about how computational work can help them in their research, and we finally have a nice wholesome package to offer to them that does not require two years of studying just to get started.
Of course I do not expect everybody to suddenly adopt MGs as their favored syntactic framework. I think this is one of the precious few moments where an uncomfortably nerdy analogy is in order: Linux has always been a minority operating system, albeit an important one. A world without Linux would be clearly worse, irrespective of how many people are actually using it on their computers. But Linux has reached a point of maturity where it can be used by anybody with basic computer skills and the advantages of doing so are readily apparent. I became a Linux user pretty much at the same time that I got interested in MGs, about seven years ago. Linux has come a long way since then, and so have MGs. For Linux it has already started to pay off. The thought that the same might soon be true for MGs gives me a warm fuzzy feeling in my tummy. Until the 90s cynicism kicks in. So if you think that my prophecies are utter nonsense, now's your chance to cure me of my delusions in the comment section. I, for one, blame the magic 8-ball.
- Frey, Werner and Hans-Martin Gärtner (2002): On the Treatment of Scrambling and Adjunction in Minimalist Grammars. Proceedings of the Conference on Formal Grammar (FG Trento), 41--52.↩
- Kobele, Gregory M. and Jens Michaelis (2011): Disentangling Notions of Specifier Impenetrability: Late Adjunction, Islands, and Expressive Power. The Mathematics of Language, 126--142.↩
- Fowlie, Meaghan (2013): Order and Optionality: Minimalist Grammars with Adjunction. Proceedings of MOL 2013, to appear.↩
- Hunter, Tim (2012): Deconstructing Merge and Move to Make Room for Adjunction. Syntax, to appear.↩
- I have a paper that will be made available on my website within the next few days; in the meantime you can check out some slides.↩
- Graf, Thomas (2012): Movement Generalized Minimalist Grammars. Proceedings of LACL 2012, 58--73.↩
- Stabler, Edward (2013): Two Models of Minimalist, Incremental Syntactic Analysis. Topics in Cognitive Science 5, 611--633. All the code is on github.↩
- Kobele, Gregory M., Sabrina Gerth and John Hale (2013): Memory Resource Allocation in Top-Down Minimalist Parsing. Formal Grammar 2012/2013, 32--51.↩
- Tabor, Whitney, Bruno Galantucci and Daniel Richardson (2004): Effects of Merely Local Syntactic Coherence on Sentence Processing. Journal of Memory and Language 50, 355--370.↩