Comments on Faculty of Language: Defeating linguistic agnotology; or why very vigorous attack on bad ideas is important

Erich Groat (2015-08-10 21:07):
Hah, I read this post belatedly just now, and was going to pull this quote out myself.
A delight to see someone beat me to it!

AveryAndrews (2015-04-01 17:52):
Saying a bit more about this, one might regard the role of generative grammar as the construction of a special-purpose programming language (most like a logic programming language) for writing grammar implementations, where the empirical, explanatory aspect of the project is to try to make it as true as possible that the grammars you write on the basis of relatively small naturalistic samples of the language will work out for larger samples, especially those containing complex and therefore rare structure.

Furthermore, most of the actual discussion is about what the basic principles ought to be rather than the actual syntax of the finished language, and the Chomskian camp criticizes some of the others (e.g. the LFG community) for making too many premature decisions about the latter. This is a perfectly rational criticism, regardless of whether any particular person chooses to be moved by it or not. That the Chomskian proposals are too obscure, internally contradictory, etc. is also a rational criticism, which sensible people can choose to put aside.

AveryAndrews (2015-03-31 20:43):
@Thomas:
Notation is, for me (and, I believe, Chomsky 1957, 1965 and numerous other writings from the early period at least, and also at least some contemporary theories such as LFG), exactly what you write down as a grammar to produce a given language. It has usually consisted formally (when it exists at all, as in the XLE-LFG system or the SPE phonological rule formalization) of a combination of compiler and interpreter, so that the compiler converts what the linguist writes ('S -> NP, VP', etc. for LFG, comma meaning free ordering of NP and VP) into something the interpreter uses to parse sentences or produce them on the basis of some kind of semantic input. So, on this account, it would be in principle possible for two theories to have very different innards for their interpreter (one with movement as classic derivational movement, another with no movement but some kind of reentrancy scheme) but to be the same theory, because, due to how other things worked, they defined exactly the same correspondence between grammars and languages.

re 1: I thought that Greg was reminding us that there is in fact no notation, interpreted as a compression scheme for the data, that is optimal for *all* situations. This is I believe a math fact, tho the details of the proof are beyond me. But it seems plausible. The difficulty of coming up with one notation for all natural language grammars is, however, a different issue. And of course working out an optimal notation for complete grammars of all languages is a very tough problem (!!!). As the discussion here of CG goes ... if CCG had worked-out and attractive analyses of case in Icelandic and Kayardild, and word order etc. in Modern Greek, I would surely become a practitioner ...

re 2: The nonuniqueness problem is unlikely to go away any time soon, but some ideas can nevertheless be dismissed as producing worse results for a very wide range of phenomena, I think. For example the idea discussed at the beginning of Pesetsky 2013 (Russian Case Morphology and the Syntactic Categories) of dispensing with morphosyntactic features such as case in favor of direct reference to the actual morphemes used to construct the case forms. Constructing something that functions at a minimal level for all somewhat decently described linguistic phenomena has become an extremely exacting task, and there are not likely to be many serious candidates in the ring at the same time. LFG has a fair number of implemented wide-coverage grammars ATM, Minimalism a lot of ideas, many of which are extremely interesting and often seem to explain things that LFG doesn't handle very well (the concentric nature of nominal modification, for example). So the comparisons that people want to make are often of unlike things, and putting the pieces together is hard.

I view the 'succinctness' criterion as being an attempt to say something about what is recognized as a generalization by the LAD, so that the intuitions about elegance etc. should be taken as hunches about what will work out best in the longer term. It is a notable fact that linguists often talk about predictions, etc., but actually very rarely make concrete claims, let alone demonstrate them, so the program is really not very far advanced in those terms, although the amount of insightful description that has been produced increases relentlessly, it seems to me, & I think it will come together at some point.

Meanwhile CB and DE point out that in many respects, this hasn't happened yet, which is not a problem by my lights, and disagreements about what is 'best' are surely inevitable.
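
To make the compiler-and-interpreter picture above a bit more concrete, here is a toy sketch in Python; the rule format, the permutation treatment of the comma, the little recognizer, and the lexicon are all invented for illustration and bear no relation to the actual XLE machinery:

    from itertools import permutations

    def compile_rules(rule_strings):
        # 'Compiler': turn what the linguist writes, e.g. 'S -> NP, VP'
        # (comma = the daughters may come in either order), into plain
        # ordered productions for the interpreter.
        productions = {}
        for line in rule_strings:
            lhs, rhs = [part.strip() for part in line.split('->')]
            daughters = [d.strip() for d in rhs.split(',')]
            productions.setdefault(lhs, []).extend(permutations(daughters))
        return productions

    def derivable(cat, words, productions, lexicon):
        # 'Interpreter': naive recognizer that checks whether the word
        # string can be analysed as category cat.
        if len(words) == 1 and cat in lexicon.get(words[0], []):
            return True
        for rhs in productions.get(cat, []):
            if len(rhs) != 2:
                continue  # binary rules only, to keep the sketch short
            for i in range(1, len(words)):
                if (derivable(rhs[0], words[:i], productions, lexicon) and
                        derivable(rhs[1], words[i:], productions, lexicon)):
                    return True
        return False

    rules = compile_rules(['S -> NP, VP', 'VP -> V, NP'])
    lexicon = {'dogs': ['NP'], 'cats': ['NP'], 'chase': ['V']}
    print(derivable('S', ['dogs', 'chase', 'cats'], rules, lexicon))  # True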

Anonymous (2015-03-31 16:52):
@Avery: I'm not quite sure what exactly you mean by notation (are we just talking about things like, say, the choice between features and constraints, or does even CCG vs TAG fall under that?). But irrespective of that, I think Greg was actually pointing out two distinct but closely related issues:

1) It is fairly easy to design a formalism that is "better than reality" if you only have to account for a subset of the data. For example, a formalism that only handles local processes is simpler than one that also handles non-local ones. In this case it is easy to see that the former is too weak because we know that non-local processes exist, but there are many other properties of language we do not know, many so abstract that we do not even realize that we do not know them. Our picture of the empirical facts is always incomplete, and that means that the "wrong" formalism can win out against the "right" one. So claims like "formalism X has a much more elegant analysis" come with the major caveat that this might be possible only because the formalism completely fails for another construction that nobody's looked at yet.

2) Even if we had a full picture of all the facts, there is no guarantee that there is a unique solution. Formalism A may be better for phenomenon X while formalism B is superior for phenomenon Y. So which one do you pick in this case? You might want to bring in psycholinguistic or neurological evidence, but then you need a linking hypothesis, and there you've also got several to choose from. Suppose you have a formula a*m + b*n = 1, where a and b are indicators for the relative importance of m and n. Then you can't give me a unique solution unless the weights a and b are fixed, but we have no way of fixing them.

These problems arise exactly because generative grammar puts a strong emphasis on notation and succinctness (a trade-off between grammar complexity and structural complexity, with the weights of each factor mostly determined by personal taste). A physicist just needs a translation between different theories so that they can pick whatever is easier to use for a given problem. That simply doesn't make any sense for most generative grammarians, because the grammar *as specified* is taken to be a real object of human cognition. In combination with the common assumption that everybody uses the same grammar (rather than my brain running MGs and yours LFGs), this means that there can be only one true grammar formalism, which is the main non-utilitarian motivation for succinctness criteria.

AveryAndrews (2015-03-31 15:29):
Greg: "Ideally, we'd have a single formalism which gave the most elegant analyses possible to everything (the best of all worlds).
I doubt that this is possible, given the well-known results in the computer science literature about the succinctness gains of more powerful descriptive apparatuses over less powerful ones. ..."

But generative grammar includes (or maybe is even based on) the idea that there are facts of the matter about what kinds of generalizations are possible in language (a conceptual necessity for timely learning, with various empirical observations relevant to what kinds of generalizations we don't need to provide for (e.g. moving clitics to in front of the last word of a clause)), and also, I think, a further claim that we can produce an optimal notation for those generalizations, with messy things explicable in terms of historical or cultural factors. E.g. the massive near-duplications in English vocab as a consequence of the Norman Conquest, or the inhibitions on (classic, 'recursive symbol' (Bach 1964/74)) recursion in Piraha as a consequence of a strong cultural inhibition against fancy linguistic performances. We haven't found this notation yet, and maybe it doesn't exist (but there has been progress, such as the revisions to how passive constructions work), but the project is not to find a notation that works for everything, only one that works for languages naturally learnable by humans.

Omer (2015-03-31 14:58):
thx, Greg & Matthew.

Omer (2015-03-31 14:57):
This comment has been removed by the author.

Anonymous (2015-03-31 09:38):
@Omer,

I can only think that you want to specify, lexically, that 'the' *has* to form a constituent with a following noun (does this cause problems elsewhere?), but 'three' doesn't -- so then 'convinced three' is a possible constituent but 'convinced the' isn't. Whether you do this in terms of a unary modality, or in terms of the difference between a non-associative vs associative merge mode, seems to be largely a matter of taste.

the := []-1 np/n
three := np/n (or (s/(np\s))/n or whatever)

or

the := np /i n
three := np /j n

This takes us beyond the Lambek calculus, but that's pretty much inevitable given what Greg has been saying: the Lambek calculus both undergenerates and overgenerates.

Greg Kobele (2015-03-31 08:15):
@Omer: There is no general `hypothetical reasoning approach'; there is a huge slew of grammatical frameworks which make use of a hypothetical reasoning operation. You are right that in the Lambek calculus, with the standard lexical type assignments (Det := NP/N, convince := (NP\S)/S/NP), the above sentence is generable. I don't know how practitioners of other such frameworks would respond to the relative unacceptability of your example.

I think that the general point you are raising is right, though: there is a constant interplay between theory development and its testing against data. What advantage one sort of approach might seem to have today in empirical domain X might disappear tomorrow once more becomes known about X. This is one reason why I am happy to see lots of people working in different traditions.

Omer (2015-03-31 07:32):
@Greg: Just out of curiosity (and ignorance) -- how would the Hypothetical Reasoning approach deal with the ungrammaticality of (i)?

(i) * John convinced three, and Mary convinced the, men that they should leave.

Greg Kobele (2015-03-31 07:27):
@Norbert: I agree with you that the revisionist picture I sketched would not have engendered the enthusiasm of the actual introduction of the MP.
My point was merely that, with the benefit of hindsight, the shift from GB to MP can imho be viewed as a very simple formal tweak (with important consequences, formal and otherwise). The rhetorical point of my statement was to provide a formal basis for the feeling of continuity between GB and MP which was being asked about above.
My main point was that, in response to Alex's question about why GB-MP is felt to be `closer' than GB-TAG/HPSG/LFG/etc., the formal tweak which derived MP from GB meant that much of the analytical style and methodological assumptions of the latter could be recast, without much more than notational changes, into the former. (Sometimes notational changes are important.)

I don't think that there is much computational similarity between the two, just as I don't think that there is much computational similarity between ((R)E)ST and GB.

@Christina: I think you are right that `flexible constituency' effects are a real selling point of categorial-type grammars; they've got the best analysis, hands down. Of course, there are alternative analyses in other frameworks; across-the-board movement is an obvious choice in transformational-style theories, but in comparison to CG this may seem something like a usine à gaz (an over-complicated contraption). Transformational theories have imho some of the most elegant accounts of GF-changing constructions, of wh-constructions, elliptical constructions, and many others.

I do not think that your metric of simplicity is appropriate, however. (TL/DR: you need to consider the ability of the framework to handle all constructions, not just a single one.) It is well known that classical CG, and the Lambek calculus, are too weak to describe the patterns of natural language. Extensions thereof, like CCG, multi-modal CG, the Lambek-Grishin calculus, the displacement calculus, hybrid type-logical CG, etc., are strictly more complicated than vanilla CG. Some of these are more complicated than MG (the formal version of MP), and some (CCG) are less complicated than MG. Ideally, we'd have a single formalism which gave the most elegant analyses possible to everything (the best of all worlds). I doubt that this is possible, given the well-known results in the computer science literature about the succinctness gains of more powerful descriptive apparatuses over less powerful ones. As long as we are aiming for a restrictive theory of grammatical description, we will most likely be forced to trade off descriptive elegance in one construction for descriptive elegance in others.

Anonymous (2015-03-31 06:44):
Why, thank you, Kleanthes, for sharing your emotional responses so openly. Presumably, you are able to empathize with people on 'the other side' who are similarly affected by Norbert's ongoing put-downs as you are by my sarcasm [you did realize I was being sarcastic, I hope]. As for my alleged ignorance of the technical literature: rest assured I am a lot more familiar with the technical literature of your side than you seem to be with the technical literature of 'the other side' [judging by the citations in your papers]. So your paternalism is a tad out of place...

@Thomas: thank you for going through the effort of typing your answer twice - much appreciated.

Kleanthes (2015-03-30 10:55):
@CB: You mentioned in the past that you lacked linguistic training, and at the beginning nobody objected to your activity here. There were even lots of reading recommendations for you, if I remember well. Yet you kept hammering at everything that moved in this blog, and obviously without having read (or understood) the technical details of linguistic theory which the people in this blog have been involved in, some even for several decades, and with some even shaping the technical developments.

Even recent hints that instead of "reviewing" books and spending your time on social media playing the "critic" you should actually write and publish original research went right by you. What you have "contributed" to this blog is a lot of anti just for the sake of being anti, very few original thoughts, and certainly nothing well founded. I think the majority of the responses show that I'm not alone in feeling this way.

However, your last posting is just taking the p*** out of every reader's last bit of patience:

"e.g. what does Norbert's elaboration on degrees of disappearance of DS have to do with continuity of one computational property during the ST -- REST -- GB -- MG sequence? It IS of course curious that back in 1995 Chomsky celebrated the elimination of DS as a massive intellectual achievement of MP but that, apparently, in 2015 DS still has not been completely eliminated. In case Norbert wanted to draw our attention to this curious fact"

I will NOT do you the favor and take your complete ignorance apart here. You may get some more serious responses from people who may think that you actually read up on technical details, but before you now engage on actual technical components of syntactic theory, about which you know nothing and so far obviously haven't read or understood anything, I suggest you take a gracious exit so that we can actually discuss issues that WE care about. You can then post your contrary views on Facebook and leave us alone.

Norbert (2015-03-30 10:24):
I have no problem with your description of Greg's point. What I was reacting to was the idea that MP amounted to "throwing away the top part and adding an arc from the mover to the target." This may be some of what Minimalism amounted to, but it does not exhaust it, nor in my opinion does it really get to most of the interesting technology. Maybe this is what made MGs interesting for some, but it is not what grabbed the attention of many syntacticians. Now maybe we should not have been impressed with the copy theory, Extension, various feature checking algorithms. But as they were what we used to derive some of the observed universals, this is what grabbed many people's attention. So, for example, it was the link between theta roles and phrase structure that got me interested in the possibility of control as movement. So, my reaction to Greg's point was a small one: what he highlighted, the elimination of DS, was perhaps the most interesting formal feature of the move from GB, but it was not what many MPers would point to, and the reason was that the elimination of DS was less complete than advertised.

Anonymous (2015-03-30 09:41):
@Tim: Greg's remark is purely formal. He's saying that a GB derivation consisted of two parts, a top part that's exclusively Move steps, and a bottom part that's exclusively lexical items and Merge steps. Each Move node is related to two positions in the Merge half, the mover and the target of Movement.

The step from GB to Minimalism, then, amounts to throwing away the top part and adding an arc from the mover to the target, which provides all the information you need to know.

Note that this has absolutely nothing to say about whether movers must start out in theta positions. If you have an MG derivation where they don't, you can translate it back into a GB derivation where they don't, and the other way round. The only thing that falls out immediately is that a mover cannot move before it has been merged, so you have an implicit "merge before you move" restriction for each moving LI. The same is not true for the LI at the target site: with lowering movement, for example, you can have a Move step that is implicitly satisfied by a mover that still needs to be merged. That's why lowering is very similar to hypothetical reasoning.

Anonymous (2015-03-30 09:31):
Well, it looks like the first comment made it after all, but I'll keep the shortened version around anyways since it does a better job at bringing out the core idea that drives all the different implementations.

Anonymous (2015-03-30 09:24):
@Christina: My previous comment got eaten by a lovely 503 service disruption, so here's the abridged version.

There are many different ways to get flexible constituency; some involve Move, some Merge, but they all follow the same strategy for the example you gave: discharge the third argument position in some way, and then insert the third argument at the target position. Discharging the third argument could take the form of adding a new entry for 'gave' that only needs two arguments; or merging an empty head as the third argument; or replacing Merger of that argument by an instance of sideward movement of the third argument from some other position into the argument position. Insertion at the target position is either achieved directly via base merger there, or movement into that position. All those accounts look pretty much the same from a derivational perspective in that they involve a dependency between the target site and the two argument positions. They only differ in how this dependency is encoded.

Since you mentioned hypothetical reasoning, let me add that hypothetical reasoning can be added to MGs without an increase in strong generative capacity (http://home.uchicago.edu/~gkobele/files/Kobele10AAbar.pdf). Essentially, it's just a type of covert downward movement.

Anonymous (2015-03-30 09:11):
@Christina: The analysis is pretty much the same, and is implemented in two steps. First you add new lexical items to the lexicon that allow you to Merge 'Terry' and 'Robin' with 'gave', without the need for a third argument. Said third argument is directly merged in its target position.
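
Very roughly, and with invented category and feature names (a toy Python rendering of MG-style lexical items and feature-checking Merge, not a worked-out fragment), the added entry sits alongside the ordinary one like this:

    # Toy sketch: MG-style lexical items as feature lists.  Merge checks the
    # head's first selector feature '=x' against the argument's category 'x'
    # and deletes both.  Word order, Move, and the SMC are all ignored here.
    LEX = {
        'gave_3arg': ['=d', '=d', '=d', 'v'],  # the ordinary ditransitive entry
        'gave_2arg': ['=d', '=d', 'v'],        # the added entry: only two
                                               # arguments are selected; the
                                               # shared object is merged
                                               # directly at its target site
        'Robin': ['d'],
        'Terry': ['d'],
    }

    def merge(head, arg):
        # Return the head's remaining features if the selection succeeds.
        if head and arg and head[0] == '=' + arg[0]:
            return head[1:]
        return None

    # With the added entry, 'Robin gave Terry' comes out as a complete
    # verbal constituent:
    after_terry = merge(LEX['gave_2arg'], LEX['Terry'])   # ['=d', 'v']
    after_robin = merge(after_terry, LEX['Robin'])        # ['v']
    print(after_robin)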

That gets you the flexible constituency, but also overgenerates, because now the grammar generates 'Robin gave Terry' as well (which strictly speaking is syntactically well-formed and only semantically odd, but that's beside the point here).

In order to get rid of the overgeneration, you define a regular tree language that contains only the derivations with non-flexible MG constituencies and the desired flexible CG constituencies. Via a specific algorithm this regular tree language is directly precompiled into the grammar as a filter on derivations, thus giving you a grammar with flexible CG constituency.

An alternative route is to lift incomplete constituents to complete ones by merging empty heads in empty argument slots and limiting the distribution of these empty argument fillers. Not much of a difference, but it's more transparent, doesn't require as many lexical items, and highlights the parallel between hypothetical reasoning in CGs and movement in MGs. Greg has some insightful observations on this in his treatment of the A-A' distinction (http://home.uchicago.edu/~gkobele/files/Kobele10AAbar.pdf), although he doesn't make an explicit connection to CG, I think.

All these derivations look awfully similar, and they're also very close to those with across-the-board movement, or sideward movement of the third argument, and so on. They're all just ways of satisfying an argument position by a phrase that is realized somewhere else.

What the first two strategies I mentioned have in common is that they replace movement of the third argument by Merge, and as such they will fail whenever remnant movement is involved. So if there are linguistic constructions where CCG posits flexible constituency and a sequence of combinatorial operations that correspond to remnant movement, we also have to integrate multiple Move steps with the flexible constituency in some fashion. That can be done, but since I don't know of any such cases I can't tell you what exactly one would have to take care of.

Tim Hunter (2015-03-30 08:55):
Just a short addition to the discussion about elimination of d-structure. There was also the empirical argument from 'tough'-movement constructions that it was not possible to have all theta-assignment preceding all movement. In other words, sometimes theta-assigners are created by movement. In combination with the assumption that theta roles are assigned at first merge (the "residue" that Norbert mentioned), this forces us to allow move and merge steps to be interspersed.

I'm not sure exactly how this relates to Greg's conception of the shift to MP: "observing that all the grammatically relevant information involving move was contained in the reentrant arcs" sounds like it makes it a purely formal, rather than empirical, shift ... is that right?

Anonymous (2015-03-30 08:30):
What a lovely discussion. I hope that Alex C. can extract the answer to his question, because if it is in the deep structure of these comments I surely missed it; e.g.
what does Norbert's elaboration on degrees of disappearance of DS have to do with continuity of one computational property during the ST -- REST -- GB -- MG sequence? It IS of course curious that back in 1995 Chomsky celebrated the elimination of DS as a massive intellectual achievement of MP but that, apparently, in 2015 DS still has not been completely eliminated. In case Norbert wanted to draw our attention to this curious fact - THANK YOU.

Now, in this spirit of wandering wherever the flow of conversation might take us, maybe someone could elaborate exactly how one gets flexible constituency in MGs? AFAIK, MG work somehow mimics flexible constituency by movement+deletion operations. But that seems to involve a lot of arbitrary and unnecessary operations, so maybe, instead of more alphabet soup [to borrow Alex's phrase] of abbreviations, you can explain how it works and why it is preferable? Let's just take a concrete example:

[1] John offered Mary, and Robin gave Terry, a couple of tickets to the Budapest String Quartet performance.

As is familiar, one gets flexibility of constituency in CG by hypothetical reasoning. So 'Robin gave Terry' corresponds to a real constituent, with its own compositional semantics. So one is combining two constituents which are looking for an NP ('a couple of tickets to the Budapest String Quartet performance'). Nothing moves, nothing is deleted. That seems more elegant to me than all those additional operations required by MG - but I am no expert, so maybe you can show us the detailed analysis of [1] and compare it to the CG analysis?

Anonymous (2015-03-29 18:00):
"Chomsky, at least, believes that the reduction of movement and structure building to species of merge is an important innovation of Minimalism. From what I can tell, this unification has proven to be difficult to formalize in MG"

There are three properties of Move that separate it from Merge in the standard definition of MGs:

1) it involves different features,
2) it is a unary operation,
3) it must obey the Shortest Move Constraint.

All of them are superficial.

1) The distinction between Merge features and Move features can be dropped without changing anything about the licensed structures; it only affects how you have to write your lexicon to get a specific set of well-formed derivations.

2) That Move is unary is just a matter of succinctness and simplicity; you can always make it binary like Merge, it just means that derivations are no longer trees but multi-dominance trees.

3) We could define an analogue of the Shortest Move Constraint for Merge, and it would always be satisfied due to how Merge works.

But all of this does not change the fact that an MG with Move is invariably more complicated than one that just uses Merge. Both the derivation trees and the mappings are more complex, both weak and strong generative capacity increase a lot, you need a very different type of memory to verify well-formedness, and parsing complexity increases; all as a result of Move introducing fairly intricate long-distance dependencies.
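
To put that last point in the bluntest possible terms, here is a toy contrast in Python (hand-built trees, invented feature strings, no feature deletion and no SMC): a Merge step can be checked by looking at the two sisters alone, whereas a pending Move feature has to be found, and remembered, arbitrarily deep inside the structure.

    # Nodes are (feature_string, children) pairs; this only illustrates
    # locality vs non-locality, it is not a faithful MG implementation.
    def check_merge(selector, argument):
        # Local check: a selector feature '=x' on one sister against the
        # category feature 'x' on the other sister.
        return selector[0].split()[0] == '=' + argument[0].split()[0]

    def has_pending_licensee(node, feat='-wh'):
        # Non-local check: is there a '-wh' item anywhere inside this
        # subtree that a '+wh' licensor higher up still has to attract?
        feats, children = node
        return feat in feats.split() or any(
            has_pending_licensee(child, feat) for child in children)

    what = ('d -wh', [])
    ate  = ('=d =d v', [])
    mary = ('d', [])
    vp   = ('v', [ate, what])          # after merging 'ate' with 'what'
    vp2  = ('v', [mary, vp])           # after merging in the subject
    print(check_merge(ate, what))      # True: decided by the two sisters alone
    print(has_pending_licensee(vp2))   # True: needs a walk through the whole tree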

And that basic fact is not due to some peculiarity of MGs, it also arises in the Collins & Stabler formalization, where you have to keep track of copies.

Unification is a nice thing, but it's just as important to pay attention to the differences. Personally, I find it much more interesting that the increase in derivational complexity brought about by Move also occurs with unbounded recursive adjunction (adjunction to an adjunct of an adjunct of an ...), which is something Chomsky has very little to say about (the pair merge story always struck me as uninspired). Even though adjunction is defined very differently, a cognitive architecture that can compute MGs with Move can also compute MGs with unbounded recursive adjunction, which I believe is a nice argument that adjunction comes for free. But that requires accepting first that Move is more complex than Merge.

Norbert (2015-03-29 15:46):
@Greg

The elimination of DS is not quite as complete as you suggest, at least in the standard versions of the theory. True, there is no complete segregation of DS operations prior to all movement. But there is a residue of this in that one operation, first merge to a theta position, precedes any further I-merges of that element. The MTC tries to eliminate this distinction in allowing movement into theta positions. However, so far as I know every theory assumes that a DP begins its derivational life by merging in a theta position. Thus there seems to be a "when" per DP: first theta, then whatever.

It is also worth noting that what you are discussing is MG, not minimalism "in the wild." Chomsky, at least, believes that the reduction of movement and structure building to species of merge is an important innovation of Minimalism. From what I can tell, this unification has proven to be difficult to formalize in MG (I may be wrong here, but structure building and "movement" remain formally distinct operations in the stuff I've seen).

As you know, there are flavors of Minimalism that have also argued for unifying movement with construal, something else that GB did not do. So too with distinguishing case and theta heads (i.e. a head cannot theta-mark and case-"mark" the same expression). At least within the syntax community these are thought of as important differences between GB and minimalist theory. Their impact on formalism, however, may not be as clear.

So, at least in the standard vision, DS has not entirely disappeared; what has gone, as Chomsky put it, is SATISFY (the requirement that all theta roles be discharged before any movement). This leaves a residue of earlier DS theory, however.