Comments on Faculty of Language: Formal Wear Part II: A Wider Wardrobe
Blog author: Norbert (http://www.blogger.com/profile/15701059232144474269)

---

This comment has been removed by a blog administrator.

-- Anonymous (2013-09-05), https://www.blogger.com/profile/08080711693129017959

---

Like Thomas, I completely agree that computational complexity is an important and comparatively neglected topic in linguistics, but I think one really needs the toolkit of fixed-parameter tractability to get a handle on the complexity of parsing, because there is such a conceptual difference between the dependence on the length of the sentence and the dependence on the size of the grammar. So you really need an analysis that uses more than one parameter ("multidimensional" is the favoured term, I think).

If you accept Ristad's claim (I don't, for technical reasons) that language is NP-complete in some sense, then you do have some explaining to do about how people can process long sentences. Presumably they can't once the number of certain dependencies goes above some small number k (otherwise we could crack codes by asking people whether some very long sentence is grammatical), in which case we have a possible solution: make k the parameter of an FPT analysis. k might be, for example, a bound on the number of distinct referents in a sentence.

But just giving up and saying it's intractable doesn't seem like a viable research strategy, if you think that complexity is important.
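[The multi-parameter point above can be made concrete with the textbook case: CKY recognition for a grammar in Chomsky normal form runs in roughly O(|G| * n^3) time, so sentence length n and grammar size |G| enter the bound as separate parameters. A minimal sketch, using a toy grammar invented purely for illustration:]

```python
from itertools import product

# Hypothetical toy CNF grammar, for illustration only:
# S -> NP VP, VP -> V NP, NP -> 'they' | 'fish', V -> 'can' | 'fish'
BINARY = {('NP', 'VP'): {'S'}, ('V', 'NP'): {'VP'}}
LEXICAL = {'they': {'NP'}, 'fish': {'NP', 'V'}, 'can': {'V'}}

def cky_recognize(words, binary=BINARY, lexical=LEXICAL, start='S'):
    """CKY recognition: the three span indices contribute the n^3 factor,
    while the rule-table lookups contribute a separate grammar-size factor."""
    n = len(words)
    # chart[i][j] = set of nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b, c in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary.get((b, c), set())
    return start in chart[0][n]
```

[For this toy grammar, cky_recognize(['they', 'can', 'fish']) succeeds while cky_recognize(['fish', 'they']) fails; the point is only that the runtime factors into a polynomial in n and a separate grammar-dependent term, which is exactly the shape of bound a multidimensional analysis tracks.]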
You would then need a much more articulated performance theory to fill the gap between grammaticality and acceptability if your competence grammar is not efficiently parsable; otherwise there will be a massive empirical problem with the predictions about these difficult sentences.

I like Iris van Rooij's work on complexity in cognitive science, which is very relevant to this debate; I think I gave it a plug earlier on this blog.

-- Alex Clark (2013-06-24), https://www.blogger.com/profile/04634767958690153584

---

I agree with the basic message of this post. There are many interesting properties of formalisms besides their weak generative capacity: strong generative capacity (the structural complexity of the generated tree language), learnability, computational complexity, and more recently also derivational complexity (the structural complexity of the derivation trees). Many talks at the major conferences in mathematical linguistics (MOL, FG, and LACL) now explore these issues rather than weak generative capacity. So it's good to see non-formalists being introduced to one of the more elegant pieces in our mathematical toolbox.

That being said, the implications of computational complexity results should have been stated more carefully. To the uninitiated, it might seem as if computational complexity theory frees us from the need to make formalization decisions: we simply look at a given empirical problem, e.g. binding theory, and then show that any theory that captures the relevant patterns is powerful enough to solve some computational problem, say 3SAT. But this is not how it works. Instead, one has to formalize the empirical problem, and the complexity result is established with respect to this formalization.
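[For readers who haven't met 3SAT: it asks whether a Boolean formula in conjunctive normal form, with three literals per clause, has a satisfying assignment, and it is the canonical NP-complete problem that such reductions target. A minimal brute-force checker, exponential in the number of variables as one expects absent a polynomial-time algorithm, can be sketched as:]

```python
from itertools import product

def satisfiable_3cnf(clauses):
    """Brute-force satisfiability check. Clauses are lists of nonzero ints,
    where k stands for variable k and -k for its negation (DIMACS-style).
    Trying all 2^n truth assignments is exponential in the number of
    variables n, which is the naive cost for an NP-complete problem."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        value = dict(zip(variables, bits))
        # A clause is satisfied if at least one of its literals is true.
        if all(any(value[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False
```

[A reduction in Ristad's style would map any such formula to an instance of the formalized linguistic problem, so that a solver for the latter decides satisfiability of the former; the sketch above only shows what the source problem of such a reduction looks like.]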
Just as linguists disagree about what the right theory of syntax, semantics, etc. should be, they also disagree about what data their theory has to account for, and that necessarily limits the relevance of most complexity results. That's why the restriction to "descriptively adequate" theories strikes me as highly misleading here.

Consider Ristad's binding theory results, for example. Binding is not regarded as one uniform phenomenon by most syntacticians. Many cases of variable binding are supposed to take place in semantics (donkey anaphora), and some rules are clearly discourse-based (Rule I). Now, if I am only interested in the syntactic aspects of binding, only a subpart of the obviation problem is relevant to me. Does Ristad's proof go through even if we consider only this subpart? Since his proof relies on weak crossover and ellipsis, it breaks down as soon as 1) one of the two is not purely syntactic in nature, or 2) not all of the data he uses involves only syntactic binding. I do not believe that a purely syntactic theory of ellipsis is feasible, and I also have my doubts about the second point. In light of this uncertainty, I think it is preferable to squeeze as much of binding theory into your syntactic formalism as possible and see how far that gets you (here's a recent attempt by me and Natasha Abner: http://www.thomasgraf.net/doc/papers/tag2012_binding.pdf). As a matter of fact, that is pretty much the standard procedure for all of generative grammar, in particular Minimalism.
Beyond some general conceptual claims, Minimalism isn't a theory of language but a theory of syntax, so it can be descriptively adequate for its object domain without being "in the same complexity soup".

Two closing remarks:
1) Besides the conceptual points raised above, Ristad's obviation work has also been challenged on technical and empirical grounds (http://dl.acm.org/citation.cfm?id=976611). The ensuing back and forth can be found at http://dl.acm.org/ft_gateway.cfm?id=979729.
2) To the best of my knowledge, the term "concealed reference-set computation" (fn. 1) has only been used in my transducer treatment of reference-set constraints (and that's also the only thing Google spits out). Is this a concealed reference to said work? If so, I am at a loss as to what the connection might be to the complexity theory results presented here.

-- Anonymous (2013-06-22), https://www.blogger.com/profile/07629445838597321588