Sunday, February 9, 2014

Where Chris Collins enters the fray

Chris sent me this longish response to some of what has appeared in the blog. In the hope of getting him to become a regularish participant in the ongoing discussions I here post his "Response to Norbert." I feel that he let me off lightly, actually. But this said, I think that I can still find some points to disagree with. I will restrict these to the comments section and hand the floor over to him. Thx Chris.

*****

Response to Norbert

I read with interest Norbert’s recent post on formalization: “Formalization and Falsification in Generative Grammar”. Here I write some preliminary comments on his post.  I have not read other relevant posts in this sprawling blog, which I am only now learning how to navigate. So some of what I say may be redundant. 

For me the quote by Frege in the Begriffsschrift (pg. 6 of the book “Frege and Gödel”) indicates what is important when he analogizes the “ideography” (basically first and second order predicate calculus) to a microscope: “But as soon as scientific goals demand great sharpness of resolution, the eye proves to be insufficient. The microscope, on the other hand, is perfectly suited to precisely such goals, but that is just why it is useless for all others.” Similarly, formalization in syntax is a tool that needs to be employed when needed. It is not an absolute necessity, and there are many ways of going about things (as I discuss below). By citing Frege, I am in no way claiming that we should aim at the same level of formalization that Frege did.

There is an important connection with the ideas of Rob Chametzky (posted by Norbert elsewhere on this blog). As we have seen, Rob divides up theorizing into meta-theoretical, theoretical and analytical. Analytical work, according to Chametzky, is: “concerned with investigating the (phenomena of the) domain in question. It deploys and tests concepts and architecture developed in theoretical work, allowing for both understanding of the domain and sharpening of the theoretical concepts.” It is clear that more than 90% of all work in linguistics (maybe 99%) is analytical, and that there is a paucity of true theoretical work.

A good example of analytical work would be Chomsky’s “On Wh-Movement”, which is one of the most beautiful and important papers in the field. Chomsky proposes the wh-diagnostics and relentlessly subjects a series of constructions to those diagnostics, uncovering many interesting patterns and facts. The consequence that all these various constructions can be reduced to the single rule of “wh-movement” is a huge advance, allowing insight into UG. Ultimately, this paper led to the Move-Alpha framework, and indirectly to Merge (the simplest and most general operation yet).
However, “On Wh-Movement” is what I would call “semi-formal”. It gives semi-formal statements of various conditions and principles, and leaves many assumptions implicit. As a consequence, it has the hallmark property of semi-formal work: there are no theorems and no proofs. Formalization is stating a theory clearly and formally enough that one can establish conclusively (i.e., with a proof) the relations between various aspects of the theory, and between claims of the theory and claims of alternative theories.

Certainly, it would have been a waste of time to fully formalize “On Wh-Movement”. It would have expanded the text 10- to 20-fold at least, and added nothing. This is something that I think Pullum completely missed in his 1989 paper on formalization. The semi-formal nature of syntactic theory, also found in such classics as “Infinite Syntax” by Ross and “On Raising” by Postal, has led to a huge explosion of knowledge that people outside of linguistics/syntax cannot really understand, in part because syntacticians are not very good popularizers (hence all the lame discussion out there on the internet and Facebook about what the real accomplishments of generative grammar have been).
Theoretical work, according to Rob, is “concerned with developing and investigating primitives, derived concepts and architecture within a particular domain of inquiry.” There are many good examples of this kind of work in the minimalist literature. I would say Uriagereka’s original work on multiple spell-out qualifies, and so does Epstein’s work on c-command, amongst others.

My feeling is that theoretical work (in Chametzky’s sense) is the natural place for formalization in linguistic theory. The reason is that it is possible, using formal assumptions, to show clearly the relationship between various concepts, assumptions, operations and principles. For example, it should be possible to show, from formal work, that things like the NTC and Extension condition should really be thought of as theorems proved on the basis of assumptions about UG. If the NTC and Extension condition are theorems, they can actually be eliminated from UG as independent stipulations. And from this, one can wonder whether that program can be extended to the full range of what syntacticians normally think of as constraints.
In this, I agree with Norbert, who states: “It can lay bare what the conceptual dependencies between our basic concepts are.” Furthermore, as my previous paragraph makes clear, this mode of reasoning is particularly important for pushing the SMT forward. How can we know, with certainty, how some concept/principle/mechanism fits into the SMT? We can formalize and see if we can prove relations between our assumptions about the SMT and the various concepts/principles/mechanisms. Using the ruthless tools of definition, proof and theorem, we can gradually whittle away at UG until we have the bare essence. I am sure that there are many surprises in store for us. Given the fundamental, abstract and subtle nature of the elements involved, such formalization is probably a necessity if we want to avoid falling into a muddle of unclear conclusions.
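To give the flavor of the kind of reasoning I have in mind, here is a deliberately crude sketch (in Python; it is emphatically not the formalization in Collins and Stabler, and the representations are just toy stand-ins of my own). If Merge is nothing more than binary set formation over objects it cannot alter, then NTC-like and Extension-like properties begin to look like consequences of the definition rather than independent stipulations:

```python
# Toy sketch only: syntactic objects are immutable, and Merge just builds a
# new root containing its inputs unchanged.

def merge(a, b):
    """Merge(A, B) = {A, B}."""
    return frozenset({a, b})

the, man, left = "the", "man", "left"   # lexical items stand in for feature bundles

dp = merge(the, man)     # {the, man}
tp = merge(dp, left)     # {{the, man}, left}

# "No Tampering": merge never mutates its arguments, so the inputs survive
# unchanged inside the output.
assert dp in tp and dp == frozenset({the, man})

# "Extension": the only thing this merge can do is wrap existing objects under
# a new root, so every application extends the structure at the root rather
# than rewriting material buried inside it.
```

Nothing this crude settles anything, of course; the point is only that once the operation is written down, properties like these become things one can try to prove rather than things one has to posit.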

A related reason for formalization (in addition to clearly stating/proving relationships between concepts and assumptions) is that it allows one to clarify murky areas. One of the biggest such areas nowadays is whether syntactic dependencies make use of chains, multi-dominance structures or something else entirely (maybe nothing else). Chomsky’s papers, including his recent ones, make reference to chains at many points. But other recent work invokes multi-dominance. What are the differences and relations between these theories, and is either of them really necessary? What assumptions about UG do chains or multi-dominance entail? I am afraid that without formalization it will be impossible to answer these questions. I am investigating these questions in my seminar this semester.
These questions about syntactic dependencies interact closely with TransferPF (Spell-Out) and TransferLF, which, to my knowledge, have not only not been formalized but not even stated in an explicit manner. Investigating the question of whether multi-dominance, chains or something else entirely (perhaps nothing else) is needed to model human language syntax will require a concomitant formalization of TransferPF and TransferLF, since these are the functions that make use of the structures formed by Merge.
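Just to make the contrast vivid, here is one more toy sketch (again mine, in Python; the data structures and the little TransferPF below are illustrative stand-ins, not anyone's official formulation). The two representations of a moved wh-phrase raise rather different questions for Transfer:

```python
# Copy/chain view: "what you saw what" contains two tokens of "what";
# a chain (a list of tree positions) records that they count as one element.
copy_tree = ("what", ("you", ("saw", "what")))
chain = [(0,), (1, 1, 1)]                  # higher copy, lower copy

def transfer_pf(tree, chains):
    """Toy TransferPF: pronounce only the highest position of each chain."""
    silent = {pos for ch in chains for pos in ch[1:]}
    def spell(node, path=()):
        if isinstance(node, str):
            return [] if path in silent else [node]
        return [w for i, child in enumerate(node) for w in spell(child, path + (i,))]
    return spell(tree)

print(transfer_pf(copy_tree, [chain]))     # ['what', 'you', 'saw']

# Multi-dominance view: a single object is dominated from two positions, so
# there is no identification of copies to do after the fact...
wh = ["what"]                              # one object, not two tokens
shared_tree = (wh, ("you", ("saw", wh)))
assert shared_tree[0] is shared_tree[1][1][1]
# ...but TransferPF must now decide which attachment point of the shared
# object gets pronounced, so the linearization question simply reappears there.
```

Nothing hangs on these details; the point is that once the representations are written down explicitly, the question of exactly what TransferPF and TransferLF do with them can no longer be postponed.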

Minimalist syntax calls for formalization in a way that previous syntactic theories did not. First, the nature of the basic operations is simple enough (e.g., Merge) to make formalization a real possibility. The baroque and varied nature of “transformations” in the “On Wh-Movement” framework and preceding work made the prospect for a full formalization more daunting.

Second, the concepts involved in minimalism, because of their simplicity and generality (e.g., copies, occurrences), are just too fundamental, subtle and abstract to resolve by talking through them in an informal or semi-formal way. With formalization we can hope to state things in such a way as to make clear the conceptual and empirical properties of the various proposals, and to compare and evaluate them. In fact, I have recently been doing a lot of this with my colleagues, because only recently (by helping to write Collins and Stabler 2012) have I seen what the issues are.
So, in the spirit of Frege, formalization should be a tool for ordinary working syntacticians to clarify their ideas and examine them empirically and conceptually.


11 comments:

  1. Another way of thinking about formalization is that a syntactic theory is 'sufficiently formalized' when somebody can produce a toy (not practical, industrial strength) implementation without having to make any theoretical decisions, only ones concerned with programming implementation. LFG met this standard starting from the 1982 book version; I'll make no proclamations about other frameworks.

    What's going on in such cases, I suggest, is that the heavy lifting of formalization is done by the people who created the programming language, on whose shoulders the linguist toy-implementer can simply stand.

  2. Very interesting comments, thank you Chris. I'll leave the technical issues for others to debate but have a question about "all the lame discussion out there on the internet and Facebook about what the real accomplishments of generative grammar have been". As one of those "people outside of linguistics/syntax" I attempt to inform myself about such accomplishments by reading what experts on GG publish. And who more competent than Chomsky could one imagine? Now in his 2012 'The Science of Language' he provided an explanation, which I cite below:

    "JM: Noam, let me ask about what you take to be your most important contributions. Do you want to say anything about that?
    NC: Well, I think that the idea of studying language in all its variety as a biological object ought to become a part of future science - and the recognition that something very similar has to be true of every other aspect of human capacity. The idea that - there was talk of this in Aspects, but I didn’t really spell it out - the belief ...
    [Wait; I’ll start over.] B. F. Skinner’s observation is correct that the logic of behaviorism and the logic of evolution are very similar - that observation is correct. But I think his conclusion - and the conclusion of others - is wrong. Namely, that that shows that they’re both correct. Rather, it shows that they're both incorrect, because the logic of behaviorism doesn’t work for growth and development, and for the same reason, the notion of natural selection is only going to work in a limited way for evolution. So there are other factors. As I said in Aspects, there’s certainly no possibility of thinking that what a child knows is based on a general procedure applied to experience, and there’s also no reason to assume that the genetic endowment is just the result of various different things that happen to have happened in evolutionary history. There must be further factors involved - the kind that Turing [in his work on morphogenesis] was looking for, and others were and are looking for. And the idea that maybe you can do something with that notion is potentially important. It's now more or less agreed that you can do something with that notion for, say, bacteria. If you can also do something with it for the most recent - and by some dimension most complex - outcomes of evolutionary history like language, that would suggest that maybe it holds all the way through." (Chomsky, 2012, 76)

    I found this rather surprising [because I DO believe generative grammar has produced considerable accomplishments along the lines you lay out above]. So maybe you can help me understand why Chomsky never mentioned any of the work on syntax that he either contributed himself or that was made possible as a result of his contributions. Why is he talking instead about mysterious 'other' factors and a notion with which one can do something for, say, bacteria? It seems rather remote from work on syntax...

  3. I think I disagree with your characterization of 'On WH Movement' as a piece of analytical work. I would have placed it solidly in the theory group, albeit with quite a bit of "application" to empirical data. The aim of the paper was to unify the various kinds of constructions that displayed island effects. It does this by factoring out a common Move component and proposing that Subjacency effects are diagnostic of movement. It then reanalyses some long distance effects that Bresnan had earlier treated as deletion, argues that they MUST involve movement given the theoretical proposal, and finds ancillary evidence to support this. Last of all, the argument is driven by higher level theoretical concerns relating to computational complexity. For these reasons I would catalogue things differently than you do.

    I think that I would also disagree that formalization is required (note, I don't say it might not be helpful, only that it need not be) when dealing with fundamental assumptions, e.g. basic operations in MP. Do you have any concrete examples where formalizing has helped advance our understanding of the basic concepts? The one discussion I found helpful was Kobele's observation that there is little difference between Agree-based and Merge-based accounts. I had come to a similar conclusion less formally, as it was pretty clear that if one thinks of Move as Agree+EPP, then this can be traded pretty much one for one with a theory that says there is no Agree, just Merge, with the EPP signaling whether a top or bottom copy is pronounced. At any rate, maybe a few examples of where formalization really helped clarify muddy points would be useful. I am curious as to what you have in mind.

    Replies
    1. @Norbert: I'm genuinely surprised by your categorization of on wh mvt as theory, given your short summary of it (which I agree with completely). Let us reformulate your description as follows (in a way that I think is innocent): the paper notes that many constructions have a cluster of properties P. It proposes that this cluster of properties comes from a single source. This predicts that when some of these properties are present, we should expect to find the others. It looks at cases where some but not all of these properties are known to be present, and discovers that the rest are as well.
      It seems to me that (something like) this description could be given of most work in linguistics.

      I'm glad you found my discussion helpful! As another example of explicitness being potentially useful, note that at the time my dissertation was written, the movement theory of control was being vigorously debated. I formalized it, and gave an explicit fragment for raising, control, passivization, and expletives in English. I showed how the MTC could deal with promise/persuade type verbs (which was problematic at the time), and how movement for theta and movement for case did not have to collapse (which was also problematic at the time).

    2. @Greg: I'm glad that I can surprise you. Here's why I said what I did. I see On Wh Mvmt as a package with the project of unifying islands under subj. This is the last piece, where a common operation is factored out and defined to be subject to subj. So in my mind, On Wh Mvmt and Conditions form a single theoretical package. It's the unification angle that I liked, and still think is the main contribution of the paper. It shows that such a unification can do a lot of work. There is also some interesting discussion of what motivates the WH mvmt + Subj account, on p. 89 I think it is. The discussion tries to give a higher order theoretical motivation for the analysis. I agree that there is lots of good analytic syntax there. But the main contribution of the paper was a theoretical one IMO: it completed the unification of islands with movement.

      I did find your discussion very helpful. I still think that your observation concerning the trading relation between movement and Agree, namely that they are different ways of bringing info forward in time, is an excellent description of what both do, and that there is little to choose between them. I, of course, loved your discussion of the MTC (but from misplaced modesty decided not to mention that case, thx for bringing it up). However, I would put your contribution in a slightly different light: it found a way of implementing Larson's basic idea concerning 'promise.' He did a lot of the leg work behind your reconciliation, as you noted. I personally do not think that this is the right way to go, given that the 'promise' cases are part of a larger set of problems witnessed in control shift phenomena. Cedric, Jairo and I discuss these and present a unified analysis in our control book. That said, yes, I liked your discussion too.

      Last point: where I think that formalization has played a very useful role, at least for me, is in showing when two "theories" are actually notational variants. Showing that two approaches are isomorphic wrt the analysis of a given phenomenon, or even in general, is a big contribution that formalization can make to our understanding of UG. One of the things I most liked about your thesis is that you were always very clear about when two proposals were really different and when not. We tend to undervalue work that shows notational variance. I am not sure why, as there is nothing wrong with two apparently different approaches converging on a common account. In fact, one might be tempted to say that this suggests that there is something interesting about the convergence. More often than not, IMO, the similarities are pretty easy to spot, but it is always nice to see the "eyeball" method confirmed formally.
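      Just to be concrete, here is one way such a claim could be made precise (this is only my rough gloss, not a standard definition in the field). Suppose each theory T_i determines a set of derivations D_i, together with PF and LF mappings from derivations to sound and meaning representations. Then:

      ```latex
      % A rough gloss of "notational variants" (mine), offered for concreteness only.
      $T_1$ and $T_2$ are notational variants (with respect to a phenomenon) iff
      there is a bijection $f : D_1 \to D_2$ such that for every derivation
      $d \in D_1$, $\mathrm{PF}(d) = \mathrm{PF}(f(d))$ and
      $\mathrm{LF}(d) = \mathrm{LF}(f(d))$.
      ```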

      BTW, I don't recall movement for case and movement for theta collapsing being a problem, but maybe I have put this all down the memory hole.

  4. "For example, it should be possible to show, from formal work, that things like the NTC and Extension condition should really be thought of as theorems proved on the basis of assumptions about UG. "

    Could you amplify that? That is a really interesting idea, but I don't see what sort of assumptions would give rise to those consequences.

  5. Alex -- you will have to read Collins and Stabler 2013. If you send me your e-mail address, I will send it to you. Unfortunately, I do not know how to exchange e-mails on this blog.

    Replies
    1. @Chris: "These questions about syntactic dependencies interact closely with TransferPF (Spell-Out) and TransferLF, which to my knowledge, have not only not been formalized but not even stated in an explicit manner."
      I'm a little surprised to read this! Certainly, in Ed's Minimalist Grammar framework, these matters have long been formalized. Building on work from Michaelis/Harkema (both 2001) I showed (in 2006/7) explicitly how to dissociate the derivation and what you are calling `transfer' (both LF and PF).
      And the NTC and extension conditions were discussed in the previous blog post (by Thomas) in the context of minimalist grammar derivations, and have long been `folklore'.

  6. On a general note, the version of my comments that Norbert posted was a draft. He will post the final version tonight. We miscommunicated about this.
