Well, he did it! Three excellent and provocative talks on syntactic theory. Here is the third set of slides. In this talk, David went after sidewards movement (SWM), a favorite idea of mine, and, though you might not believe this, I sat quite demurely through the whole talk and basically agreed with much of what David had to say. Not surprisingly, I did not buy the conclusion, but I did buy the way that he set up the problem and the way that he approached a solution given his judgments about the empirical viability of SWM. How so?
David makes two important points (i.e. points that I completely agree with).
First, that contrary to what is sometimes said, the current definition(s) of Merge does not by itself rule out SWM as an instance of Merge. It is sometimes claimed that whereas Merge, in its E and I instances, is a binary operation, when extended to SWM it becomes a 3-place operation. What is correct is that one can define SWM to be 3-place and thereby invidiously distinguish it from the other applications. BUT this is not a definition forced by any notion of conceptual simplicity or computational elegance. It is just something one can do if one wants to rule out SWM.
Moreover, as David seemed to concede, there is no really non-ad-hoc way of ruling SWM out by simplifying the definition of Merge. All require further (ahem) refinements (what I would dub extrinsic machinery designed to rule out a perfectly well defined option). As he noted in the talk, and as I have been at pains to emphasize in conversation over the years, SWM is what you get when you leave the simple definition alone. To repeat: it is possible to merge two unconnected expressions together (E-merge) and to merge a subpart of one expression to that expression (I-merge). So why is it not also possible to merge the subpart of one expression to another expression that it is not a subpart of (SWM)? In other words, you can "look inside" a constituent and you can have multiple constituents in a "workspace," and this is all you need to allow SWM unless you make things more complicated. And that's why I have always thought that SWM is a natural consequence of a very simple definition of Merge and that preventing it requires either complicating the definition or arguing that more goes into Merge than the simple operation.
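The point can be made concrete with a toy sketch (all representations here are my own hypothetical simplifications, not anyone's actual formalism): if Merge is just a binary operation over objects in a workspace, and the workspace lets you see inside constituents, then E-merge, I-merge, and SWM are all licensed by the very same operation.

```python
# Toy sketch: Merge as one binary operation over a workspace of trees.
# Trees are modeled crudely as nested tuples; all names are hypothetical.

def merge(a, b):
    """Binary Merge: combine two syntactic objects into a new object."""
    return (a, b)

def subterms(t):
    """Every constituent contained in a tree, including the tree itself."""
    yield t
    if isinstance(t, tuple):
        for child in t:
            yield from subterms(child)

# A workspace containing two independent trees.
X = ("the", "man")
Y = ("saw", "her")

# E-merge: two separate root objects.
e_merged = merge(X, Y)

# I-merge: a root object and one of its own subparts.
i_merged = merge(X, "man")      # "man" is a subterm of X

# SWM: a subpart of one tree merged with a *different* tree.
swm = merge("man", Y)           # "man" comes from X; the target is Y

# All three are instances of the same binary merge. Blocking SWM would
# require an extra stipulation, e.g. "the second input must be a root
# or a subterm of the first" -- exactly the kind of added machinery
# the text describes.
```

Note that nothing in `merge` itself distinguishes the three cases; the distinction only appears if one adds a condition on where the inputs may come from.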
Moreover, how to complicate matters is not a mystery. The idea that I-merge requires AGREE (i.e. move = Agree + EPP) suffices to block SWM, as ex ante the target does not c-command the mover. Needless to say, someone's modus ponens can be someone else's modus tollens, and one might conclude from this that AGREE is a suspect operation (someone like moi, e.g.) that should be forcefully thrown out of our minimalist Eden. But if SWM proves to be empirically unpalatable, well, this is one way to get rid of it.
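A crude sketch of the logic (using containment as a rough stand-in for the probe's c-command domain; the representations are hypothetical): if I-merge must be licensed by AGREE, and AGREE only sees goals inside the probe's own tree, then the SWM configuration fails the licensing condition.

```python
# Hypothetical sketch of "move = Agree + EPP" blocking SWM.
# Containment is used as a crude stand-in for the c-command domain.

def contains(tree, x):
    """Does this tree contain x as a (proper or improper) subpart?"""
    if tree == x:
        return True
    return isinstance(tree, tuple) and any(contains(c, x) for c in tree)

def can_agree(probe_tree, goal):
    """AGREE only finds goals inside the probe's own tree."""
    return contains(probe_tree, goal)

X = ("the", "man")
Y = ("saw", "her")

licensed_i_merge = can_agree(X, "man")   # ordinary I-merge: goal inside X
blocked_swm = can_agree(Y, "man")        # SWM configuration: "man" is not
                                         # inside Y, so AGREE fails
```

On this toy picture, SWM is not ruled out by the definition of Merge but by the independent requirement that movement be fed by AGREE.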
Side note: of course, if E-merge and I-merge are actually the very same operation, then it becomes a bit of a conceptual mystery why I-merge needs to be licensed by AGREE while E-merge does not (indeed cannot) be so licensed. And please don't tell me about having to identify the inputs to Merge, as if finding an expression inside a constituent is particularly computationally demanding (indeed, more demanding than finding an element in the lexicon or the numeration).
Second, David thinks that it is important to start thinking about how operations like Merge are algorithmically realized. In fact, his talk presents an algorithm that makes SWM unavailable. In other words, Merge the rule allows it, but the computational implementation of Merge inside an architecture with certain kinds of memory restrictions prevents it. So why no SWM on this view? It's the structure of linguistic memory, stupid!
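To see the shape of such an argument (this is my own hypothetical sketch, NOT David's actual algorithm), suppose the implementation's memory only exposes the currently active tree to Merge, with other trees in the workspace parked inertly. Then subparts of a non-active tree are simply unreachable, and SWM never arises even though the rule itself would permit it.

```python
# Hypothetical sketch (not David's algorithm): a workspace whose memory
# exposes only the currently active tree, so Merge can never reach into
# an inert tree -- blocking SWM architecturally rather than by rule.

class RestrictedWorkspace:
    def __init__(self):
        self.trees = []
        self.active = None      # index of the one tree memory exposes

    def add(self, tree):
        """Adding a tree makes it the active one; others become inert."""
        self.trees.append(tree)
        self.active = len(self.trees) - 1

    def accessible(self):
        """Only the active tree and its subparts are visible to Merge."""
        def subterms(t):
            yield t
            if isinstance(t, tuple):
                for c in t:
                    yield from subterms(c)
        if self.active is None:
            return set()
        return set(subterms(self.trees[self.active]))

ws = RestrictedWorkspace()
ws.add(("the", "man"))      # tree X
ws.add(("saw", "her"))      # tree Y is now active; X sits in inert memory

# "man" is inside X, which is not active, so it cannot be selected as an
# input to Merge: SWM is blocked by memory structure, not by Merge itself.
```

The design point is that nothing about Merge changes; what changes is which objects the implementation can hand to it.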
I liked the ambitions behind this a lot. I've argued before that this line of thinking is what MP should be endorsing (e.g. see here and here and search for SMT on the site for others). It seems to me that David is on board with this now. In particular, the SMT should concern itself not merely with restrictions imposed on FL by the interfaces AP and CI but also by the kinds of memory structures that we think are necessary to use Gs generated by FL. We know a little about these things now, and it is worth speculating as to what this would mean for FL. David's talk can be viewed as an exercise in this kind of thinking. Great.
So, I loved the lecture, but did not buy the conclusion. Let me note why very briefly.
One thing to look for in a new research program is phenomena that are different or novel from the perspective of older research programs. SWM, should it exist, is a novel kind of operation that we would not have thought reasonable within GB, say. However, if the above is right, then in an MP Merge-centric context it is a kind of operation we might expect to find. So, if we do find it, it constitutes an empirical argument in favor of the new way of looking at things. Thus, if SWM exists, it is very interesting. Of course, this does not mean that it is right. It probably isn't. But it is very interesting, and we should not try to get rid of it because it looks novel. Just the opposite. We should look for cases and see how they fare empirically. IMO, SWM analyses have been pretty insightful (e.g. Nunes on parasitic gaps, Uriagereka and Bobaljik and Brown on head movement, moi on adjunct control, and some new stuff on double object constructions whose authors must remain nameless for now). This stuff might all be wrong, but I find the analyses very interesting.
So, thx to David for 3 great lectures. High theory indeed! And also lots of fun. As I noted before, when the lectures become available on video I will link to them.