This David Adger post speaks for itself.
I’d intended this to be a response to Thomas’s comments but it got too long, and veered off in various directions.
Computational and other levels
Thomas makes the point that there’s too much work at the ‘implementational’ level, rather than at the proper Marrian computational level, and gives examples to do with overt vs covert movement, labelling, etc. He makes an argument that all that stuff is known to be formally equivalent, and that we essentially shouldn’t be wasting our time doing it. So ditch a lot of the work that goes on in syntax (sob!).
But I don’t think that’s right. Specification at the computational level for syntax is not answered fully by specifying the computational task as solving the problem of providing an infinite set of sound-meaning pairings; it’s solving the issue of why these pairings, and not some other thinkable set. So almost all of that ‘implementational’-level work about labels or whatever is actually at the computational level. In fact, I don’t really think there is an algorithmic level for syntax in the classical Marrian sense: the computational level for syntax defines the set of pairings, and sure, that has a physical realization in terms of brain matter, but there isn’t an algorithm per se. The information in the syntax is accessed by other systems, and that probably is algorithmic in the sense that there’s a step-by-step process to transform information of one sort into another (to phonology, or thinking, or various other mental subsystems), but the syntax itself doesn’t undergo information-transforming processes of this sort; it’s a static specification of legitimate structures (or derivations). I think the fact that this isn’t appreciated sometimes within our field (and almost never beyond it) is actually a pretty big problem, perhaps connected with the hugely process-oriented perspective of much cognitive psychology.
Back to the worry about the actual ‘implementational’ issue to do with Agree vs Move etc. I think that Thomas is right, and that some of it may be misguided, inasmuch as the different approaches under debate may have zero empirical consequences (that is, they don’t answer the question: why this pairing and not some other — derivations/representations is perhaps a paradigm case of this). In such cases the formal equivalence between grammars deploying these different devices is otiose, and I agree that it would be useful to accept this for particular cases. But at least some of this ‘implementational’ work can be empirically sensitive: think of David Pesetsky’s arguments for covert phrasal as well as covert feature (=Agree) movement, or Gillian’s and my work on using Agree vs overt movement to explain why Gaelic wh-phrases don’t reconstruct like English ones do but behave in a way that’s intermediate between bound pronouns and traces. The point here is that this is work at Marr’s computational level to try to get to what the correct computational characterization of the system is.
Here’s a concrete example. In my old paper on features in minimalism, I suggested that we should not allow feature recursion in the specification of lexical items (unlike HPSG). I still think that’s right, but not allowing it causes a bunch of empirical issues to arise: we can’t deal with tough constructions by just saying that a tough-predicate selects an XP/NP predicate, like you can in HPSG, so the structures (or derivations, if you prefer) that are legitimized by such an approach are quite different from those legitimized by HPSG. On the other hand, there’s a whole set of non-local selectional analyses that are available in HPSG that just aren’t available in a minimalist view restricted in the way I suggested (a good thing). So the specification at the computational level about the richness of feature structure directly impacts on the possible analyses that are available. If you look at that paper, it looks very implementational, in Thomas’s sense, as it’s about whether embedding of feature structures should be specified inside lexical items or outside them in the functional sequence, but the work it’s actually doing is at the computational level and has direct empirical (or at least analytical) consequences. I think the same is true for other apparently ‘implementational’ issues, and that’s why syntacticians spend time arguing about them.
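To make the contrast vivid for non-syntacticians, here is a minimal, purely illustrative sketch (the entries and feature names are hypothetical, not taken from my paper or from any HPSG grammar): an HPSG-style lexical entry can embed a whole feature structure as the value of a feature, while a no-feature-recursion constraint forces all feature values to be atomic, so non-local selection can’t simply be written into the lexical item.

```python
# Illustrative toy only: contrasting a recursive (HPSG-style) feature
# specification with a 'flat' one obeying a no-feature-recursion constraint.
# All entries and feature names below are hypothetical.

def is_flat(fs):
    """True iff no feature's value is itself a feature structure (a dict)."""
    return all(not isinstance(value, dict) for value in fs.values())

# HPSG-style entry for a tough-predicate: the complement's own features
# (including a gap/slash specification) are embedded inside the entry.
hpsg_tough = {
    "cat": "adj",
    "comp": {"cat": "vp", "slash": "np"},  # a feature structure as a value
}

# A flat entry: only atomic values allowed, so the non-local dependency
# cannot be stated inside the lexical item and must be derived elsewhere.
flat_tough = {"cat": "adj", "sel": "vp"}

print(is_flat(hpsg_tough))  # False: feature recursion licenses non-local selection
print(is_flat(flat_tough))  # True: the grammar, not the lexicon, must do the work
```

The point the sketch is after: what looks like an encoding choice (where feature structures may embed) directly changes which analyses are statable at all.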
Casting the Net
Another worry about current syntax that’s raised, and this is a new worry to me so it’s very interesting, is that it’s too ‘tight’: that is, that particular proposals are overly specific, which is risky, because they’re almost always wrong, and ultimately a waste of energy. We syntacticians spend our time doing things that are just too falsifiable (tell that to Vyv Evans!). Thomas calls this net-syntax, as you try to cast a very particularly shaped net over the phenomena, and hence miss a bunch. There’s something to this, and I agree that sometimes insight can be gained by retreating a bit and proposing weaker generalizations (for example, the debate between Reinhart-style c-command for bound variable anaphora and the alternative Higginbotham/Safir/Barker-style Scope Requirement looks settled, for the moment, in the latter’s favour, and the latter is a much weaker claim). But I think that the worry misses an important point about the to and fro between descriptive/empirical work and theoretical work. You only get to have the ‘that’s weird’ moment when you have a clear set of theoretical assumptions that allow you to build on-the-fly analyses for particular empirical phenomena, but you then need a lot of work on the empirical phenomenon in question before you can figure out what the analysis of that phenomenon is, such that you can know whether your computational level principles can account for it. That analytical work methodologically requires you to go down the net-syntax type lines, as you need to come up with restrictive hypotheses about particularities in order to explore the phenomenon in the first place. So specific encodings are required, at least methodologically, to make progress. I don’t disagree that you need to back off from those specific encodings, and not get too enraptured by them, but discovering high-level generalisations about phenomena needs them, I think.
We can only say true things when we know what the empirical lay of the land is, and the vocabulary we can say those true things in very much depends on a historical to and fro between quite specific implementations until we reach a point where the generalizations are stable. On top of this, during that period, we might actually find that the phenomena don’t fall together in the way we expected (so syntactic anaphor binding, unlike bound variable anaphora, seems to require not scope but structural c-command, at least as far as we can tell at the moment). The difference between syntax and maths, which was the model that Thomas gave, is that we don’t know in syntax where the hell we are going much of the time and what the problems are really going to be, whereas we have a pretty good idea of what the problems are in maths.
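Since the c-command vs scope contrast above turns on a precise structural relation, here is a minimal sketch of c-command over a toy tree, assuming the standard definition (A c-commands B iff neither dominates the other and the first branching node dominating A also dominates B); the tree and labels are my own illustrative invention.

```python
# Illustrative sketch of structural c-command, the relation mentioned
# above for syntactic anaphor binding. Tree and labels are hypothetical.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

def dominates(a, b):
    """True iff a properly dominates b (a is an ancestor of b)."""
    node = b.parent
    while node is not None:
        if node is a:
            return True
        node = node.parent
    return False

def c_commands(a, b):
    """A c-commands B iff A != B, neither dominates the other, and the
    first branching node dominating A also dominates B."""
    if a is b or dominates(a, b) or dominates(b, a):
        return False
    ancestor = a.parent
    while ancestor is not None and len(ancestor.children) < 2:
        ancestor = ancestor.parent
    return ancestor is not None and dominates(ancestor, b)

# [TP [DP every linguist] [VP praised [DP her advisor]]]
her = Node("her-advisor")
vp = Node("VP", [Node("praised"), her])
qp = Node("every-linguist")
tp = Node("TP", [qp, vp])

print(c_commands(qp, her))  # True: the subject c-commands into VP
print(c_commands(her, qp))  # False: the object does not c-command the subject
```

The weaker Scope Requirement would be stated over scope positions rather than over this purely configurational relation; the sketch just shows how sharply the configurational version carves things up.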
Structure and Interpretation
I’ll (almost) end on a (semi-)note of agreement. Thomas asks why we care about structure. I agree with him that structures are not important for the theoretical aspects of syntax, except as what systems generate, and I’m wholly on board with Thomas’s notion of derivational specifications and their potential lexicalizations (in fact, that was sort of the idea behind my 2010 thing on trying to encode variability in single grammars by lexicalising subsequences of functional hierarchies, but doing it via derivations as Thomas has been suggesting is even better). I agree that if you have, for example, a feature system of any kind of complexity, you probably can’t do the real work of testing grammars by hand, as the number of possible options just explodes. I see this as an important growth area for syntax: what the relevant features are, what their interpretations are, and how they interact. My hunch is that we’ll need fairly powerful computational techniques to explore different grammars within the domains defined by different hypotheses about these questions, along the lines Thomas indicates.
So why do we have syntax papers filled with structures? I think the reason is that, as syntacticians, we are really interested in how sign/sound relates to meaning (back to why these pairings), and unless you have a completely directly compositional system like a lexicalized categorial grammar, you need structures to effect this pairing, as interpretation needs structure to create distinctions that it can hook onto. Even if you lexicalize it all, you still have lexical structures that you need a theory of. So although syntactic structures are a function of lexical items and their possible combinations, the structure just has to go somewhere.
But we do need to get more explicit about saying how these structures are interpreted semantically and phonologically. Outside our field, the ‘recursion-only’ hypothesis (which, imo, was never actually proposed as a hypothesis, nor one that anyone in syntax took seriously) has become a caricature that is used to beat our backs (apologies for the mixed metaphor). We need to keep emphasizing the role of the principles of the interpretation of structure by the systems of use. That means we need to talk more to people who are interested in how language is used, which leads me to …
The future’s bright, the future’s pluralistic.
On the issue of whether the future is rosy or not, I actually think it is, but it requires theoretical syntacticians to work with people who don’t automatically share our assumptions, to respect the assumptions those guys bring, and to see where compatibilities or rapprochements lie, and where there are real, empirically detectable, differences. Part of the sociological problem Thomas and others have mentioned is insularity and perceived arrogance. My own feeling is that younger syntacticians are not as insular as those of my generation (how depressing – since when was my generation a generation ;-( ), so I’m actually quite sanguine about the future of our field: there’s a lot of stellar work in pure syntax, but those same people doing that work are engaging with neuroscientists, ALL people, sociolinguists, computational people, etc. But it will require more work on our part (i.e. that of theoretical syntacticians): talking to non-syntacticians and non-linguists, overcoming the legacy of past insularity, and engaging in topics that might seem outside of our comfort zones. But there is a huge amount of potential here, not just in the more computational areas that Thomas mentioned, but also in areas that have not had as much input from generative syntax as they could have had: multilingualism, the language of ageing, language shift in immigrant populations, etc. These are areas we can really contribute to, and there are many more. I agree with Thomas that we shouldn’t shirk ‘applied’ research: we should make it our own.