Comments on Faculty of Language: Some recent Chomsky lectures on Minimalism

Anonymous (2014-06-17, 12:22):

Dennis:

Thanks for your attempt to clarify. Unfortunately, I'm more confused now than ever. Actually, I don't know if I don't understand or if I just disagree. When Chomsky proposed the notion of bare output conditions being the conditions that language must meet to be usable at all, didn't he mean to claim that the narrow syntax is free to generate whatever it will (this is the sense in which I understand there to be no well-formed formula: in the narrow syntax), and that those generated objects which meet BOCs form grammatical sentences, while those which do not meet BOCs form ungrammatical strings? In other words, I understand the "overgenerated" strings to be those which are generated in the narrow syntax but do not meet BOCs. If this is not the case, then I suppose I'm not sure what the importance of BOCs is anymore. (Just to be clear: when I talk about BOCs, I'm not talking about semantic interpretation; whether an object yields gibberish or some semantically coherent expression can only be evaluated once the object has undergone interpretation in the semantic component proper; but an object must obey BOCs to gain access to the semantic component in the first place.)

Also, I repeat a question of Norbert's: if acceptability and grammaticality are distinct notions, why should the observation of gradience in acceptability judgments lead us to postulate gradient grammars?

As always, I appreciate any insight you can provide.

Dennis O. (2014-06-17, 07:45):

No disagreement here, and of course I didn't mean to suggest that putting "PF" and "LF" stickers on phenomena is per se an explanation. But I think there are some plausible analyses that go this way, which is encouraging. Just a whole lot of work left to do.

Omer (2014-06-17, 07:30):

I think the issue is a methodological one: without a restrictive theory of the interfaces, nearly *anything* (well, except for Merge itself) can be relegated to the interfaces. One can then claim victory (i.e., that a very minimal UG has been achieved), but this move will have taught us very little (I dare say, nothing) about the human capacity for language. We have in effect stuck a "PF" or "LF" sticker on phenomena that still have no explanation.

The price is steep (again, methodologically speaking): what "PF" looks like to modern-day syntacticians makes no sense to any morpho-phonologist to whom I have posed the question; similarly for what many syntacticians take to be "LF" requirements. (This is why I was careful, earlier, to say that agreement cannot be enforced using *Bare Output Conditions*: if you allow LF to impose the condition "if there is a [plural]-bearing DP within the c-command domain of, and in the same phase as, T, then T must have agreed with that DP", then it certainly can be enforced "at the interfaces.")

I would say that in practice, these 'relegations' to PF/LF often do more to impede research than to foster it.
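[A toy sketch to make Omer's parenthetical concrete: once an "interface condition" may mention arbitrary structural predicates, the quoted agreement requirement is trivially statable as an output filter. The representations below are invented for illustration and are nobody's actual proposal.]

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                                   # e.g. "T", "DP"
    features: set = field(default_factory=set)   # e.g. {"plural", "agreed"}
    children: list = field(default_factory=list)

def descendants(node):
    for child in node.children:
        yield child
        yield from descendants(child)

def boc_agreement_filter(phase_root):
    """Omer's quoted condition, stated as a filter: if a [plural] DP sits
    below T inside the same phase (a crude stand-in for c-command), then
    T must bear an [agreed] feature. True iff the output 'converges'."""
    for t in (n for n in descendants(phase_root) if n.label == "T"):
        for dp in descendants(t):
            if dp.label == "DP" and "plural" in dp.features \
                    and "agreed" not in t.features:
                return False
    return True

t_bad = Node("T", set(), [Node("DP", {"plural"})])
t_good = Node("T", {"agreed"}, [Node("DP", {"plural"})])
print(boc_agreement_filter(Node("CP", set(), [t_bad])))   # False: filtered out
print(boc_agreement_filter(Node("CP", set(), [t_good])))  # True: converges
```

The ease of writing such a filter is exactly the methodological worry: nothing about "the interfaces" constrains what the condition may say.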
Dennis O. (2014-06-17, 07:11):

Norbert: "Last point, how do you understand the idea that the grammar is the optimal realization of interface conditions if the latter does not in some sense restrict the range of the former?"

I take "optimal realization" to mean something like the most minimal system that can satisfy interface conditions while being totally blind to them, i.e., generate expressions that end up usable. It may generate all kinds of unusable expressions, but it plainly has to generate those that are usable as well. And free Merge operating over the lexicon will give you an infinity of propositional expressions, but it will need to be supplemented with theories of interface mappings (at least PF) and, eventually, the outside systems accessing the resulting representations.

Omer (2014-06-17, 07:09):

@Dennis: You write, "I should certainly read (more of) the stuff you mention, but when it comes to matters of logic (like the knowledge/competence vs. use/performance distinction) I fail to see how any empirical evidence could bear on it in principle. I mean, if you accept that there's one system, the grammar, that is a purely logical-derivational system and another bunch of systems, those involved in production, that operate in real time, then what does it even mean to say that those systems share certain properties?"

Again, (a portion of) this logic is exactly what I'm finding fault with. The logic is not a given; it is part of the linguist's hypothesis structure. If this logic puts us in a position where there are certain robust facts about the world (those things you call "informal" similarities) that we will never be in a position to explain, then the logic is the wrong one to pursue (as scientists; it might still be an interesting thought experiment from a philosophical point of view).

"If I play a game and I have its logical structure/rules internalized, I will access that knowledge when I play the game. But that doesn't mean that the logical structure and my actions in playing the game are somehow equivalent -- they're quite distinct, but my behavior isn't random because I can make use of the knowledge I have."

That last sentence is what I'm after: if you can *make use of the knowledge you have* while playing the game, then there must be a model of this knowledge that is implementable in real time. That doesn't mean that the *only* way to represent this knowledge is using a real-time implementation; but if someone makes a proposal for the content of that knowledge which is fundamentally at odds with real-time implementation, then we know that that proposal is wrong, since, out in the world, people are "making use of that knowledge" (your words) in real time.

Dennis O. (2014-06-17, 07:00):

@Thomas: I think I see what you mean, but in the case of I-language we're dealing, by assumption, with different systems interacting. So it does make a difference *where* you locate the complexity (in the grammar or in the interfacing systems), especially if you take evolutionary considerations into account. That's not just a matter of notation, although it may be from a purely formal point of view.

So I'm not denying that you could restate everything in terms of features without increasing complexity, but it would still mean putting this stuff into UG rather than into other places that are hopefully in some meaningful sense "independently given." So while I agree with Norbert's assessment of how little we know, I think it's clear which route you want to go *if* you subscribe to the general idea of cutting down UG.

Dennis O. (2014-06-17, 06:54):

@Omer: thanks, it seems like we've pinned down our disagreement (or, I guess I should say, the point where our intuitions diverge).

You say, "But now, as I said before, there is positive evidence accruing that the realtime procedures respect islands, c-command, etc. (ask some of the other UMD folks commenting on this blog; they know far more about it than I do). And if Chomsky's ontological choices make this comparison necessarily informal, then in my view, that's simply another strike against those ontological choices."

That's one way of looking at it. I should certainly read (more of) the stuff you mention, but when it comes to matters of logic (like the knowledge/competence vs. use/performance distinction) I fail to see how any empirical evidence could bear on it in principle. I mean, if you accept that there's one system, the grammar, that is a purely logical-derivational system and another bunch of systems, those involved in production, that operate in real time, then what does it even mean to say that those systems share certain properties?

Also, "Lastly, I think the notion of 'a system of competence (accessed by systems of use but distinct from them)' is incoherent if you think the systems of use can operate in realtime but the system of competence cannot (if the operation of SoU involves a step where SoC is accessed, and SoU operates in realtime, then at least that part of SoC that is accessed by SoU must also be able to operate in realtime)."

I don't see what's incoherent about the idea that production/perception systems access systems of knowledge. If I play a game and I have its logical structure/rules internalized, I will access that knowledge when I play the game. But that doesn't mean that the logical structure and my actions in playing the game are somehow equivalent -- they're quite distinct, but my behavior isn't random because I can make use of the knowledge I have. And this certainly doesn't imply that the logical structure is somehow instantiated "in real time" (except in the sense of "being there in my head in that moment").

Norbert (2014-06-17, 06:51):

@Dennis:
Aren't you conflating acceptability with grammaticality here? Your words, relevant part between *s:

"So that's a straightforward [+/-grammatical] distinction: there's a set of expressions the grammar generates, those are [+grammatical], and anything else is ungrammatical, i.e. not generated. Chomsky denies that there is such a distinction for natural language, or at least a meaningful one, *since expressions can be more or less acceptable along various dimensions*, acceptable in some contexts/registers but unacceptable in others, etc. So for natural language there's no notion of well-formed formula…"

This looks like it assumes that because utterances vary in acceptability, a grammar must eschew a notion of grammaticality. But does this follow? We know that one can get continuous effects from discrete systems (think genes and heights). So the mere fact that acceptability is gradient does not imply that grammaticality is too.

There are several factors you mention. Registers: but of course we know that people have multiple grammars, so we can say a sentence is grammatical in one but not another. You also mention other factors: but this does not mean that a sentence may not be +/-grammatical, just that grammaticality is one factor in gauging acceptability. And we have tended to think that it is a non-negligible factor, so that acceptability was a pretty good probe into grammaticality. And given the success of this method, this seems like a pretty good assumption, though there are interesting cases to argue over.

What I think you highlight, which is interesting, is that Chomsky has made a stand against thinking that all forms of unacceptability implicate the grammar. He did this before, of course (think "colorless green…"). But he seems to want to expand this yet more now. I am not really sure where he wants to draw the line, and I can imagine that there is no principled line to draw. It's an empirical matter, as they say. However, his current work clearly requires that some structures are ungrammatical and some are not. He believes that some unacceptability is not due to ungrammaticality but due to something else, e.g. a gibberish interpretation at the interface. So far this is the colorless-green strategy. Do you think that there is more?

BTW, I never understood why Chomsky thought we needed gradient GRAMMARS. Can you explain why? I can see that the combo of Gs and other things yields gradient judgments. But why gradient grammars?
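[Norbert's genes-and-heights point lends itself to a toy simulation (all numbers invented): a strictly binary grammaticality value, combined with item-specific extra-grammatical costs and noise, yields smoothly gradient mean acceptability ratings.]

```python
import random

random.seed(0)

def mean_rating(grammatical, processing_cost, n_judges=500):
    """Average 1-7 acceptability rating for one test item. The grammar's
    contribution is strictly binary; everything else (parsing load,
    plausibility, register) is lumped into cost and noise."""
    base = 6.5 if grammatical else 2.0
    total = 0.0
    for _ in range(n_judges):
        r = base - processing_cost + random.gauss(0, 0.8)
        total += min(7.0, max(1.0, r))   # clip to the 1-7 scale
    return total / n_judges

# One discrete grammaticality value, a continuum of judgments:
for cost in (0.0, 0.8, 1.6, 2.4, 3.2):
    print(f"grammatical, cost {cost}: {mean_rating(True, cost):.2f}")
print(f"ungrammatical, cost 0.0: {mean_rating(False, 0.0):.2f}")
```

On this picture, gradient data license no inference to a gradient grammar; the gradience lives in the other factors.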
Dennis O. (2014-06-17, 06:44):

Norbert: Thanks for clarifying; your comparison of B's and C's stories is helpful. Seems to me that C's story relies on the premise that you want the interrogative clause to be locally selectable, hence you need to identify it by labeling. I agree this is a strong assumption, but one that strikes me as prima facie way more plausible than distributing vacuous intermediate movement triggers. But I guess this is where it comes down to theoretical intuitions, and the account would need to be worked out much more.

However, for the sake of the argument let's assume C's and B's theories have the same "empirical coverage." Then don't you think that C's story is still preferable, since it implies no enrichment of UG? The assumption is that clause-typing/selection are conditions imposed by C-I, so the syntax need not know anything about them. But B's model needs a syntax that is sensitive to and constrained by trigger features, deviating from simplest (= free) Merge. This is where I see the general conceptual advantage of interface-based explanations, although of course at the end of the day you want them in turn to be grounded in theories of the interfacing systems. And I think it would be premature to dismiss such approaches just based on the fact that we do not yet have those theories in place.

Dennis O. (2014-06-17, 06:34):

Brandon (sorry for the delay): well, in formal languages, where you stipulate the syntax, you have well-formed formulae (anything that abides by the syntactic rules you stipulated) and everything else (those formulae that don't). So that's a straightforward [+/-grammatical] distinction: there's a set of expressions the grammar generates, those are [+grammatical], and anything else is ungrammatical, i.e. not generated. Chomsky denies that there is such a distinction for natural language, or at least a meaningful one, since expressions can be more or less acceptable along various dimensions, acceptable in some contexts/registers but unacceptable in others, etc. So for natural language there's no notion of well-formed formula, or at least this is not what we're probing when we ask people for acceptability judgments (where a host of other factors come into play besides grammar). But the notion of "overgeneration" presupposes precisely that there is a notion of well-formed formula -- if you generate formulae that aren't in the language (= set of sentences), you "overgenerate." But in linguistics people typically, and mistakenly, apply the term to analyses that predict/imply the generation of "deviant" forms. This is at best a very informal notion of "overgeneration," and not one that is defined technically, *unless* (and that's the fallacy) you equate grammaticality and acceptability. Same for crash-proof grammars: as far as I can see, the goal of these is to generate "only what's acceptable," as though this were equivalent to the notion of well-formed formula in formal language theory. Even if this were a coherent goal (which I don't think it is, since acceptability is not a defined technical notion), it would be empirically dubious, since, as Chomsky has emphasized, we want the grammar to generate all kinds of deviant expressions which have perfectly coherent interpretations, and may even be used deliberately in certain contexts.

Does this make sense?
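[The formal-language sense of "overgeneration" that Dennis appeals to can be stated in a few lines (toy grammar chosen arbitrarily): the notion is only defined relative to an independently given target set.]

```python
def generates(s):
    """Membership in L(G) = { a^n b^n : n >= 1 }: the well-formed formulae."""
    n = len(s) // 2
    return len(s) % 2 == 0 and n >= 1 and s == "a" * n + "b" * n

# An independently stipulated target language (here: n <= 3).
target = {"a" * n + "b" * n for n in range(1, 4)}

# G overgenerates relative to `target`: it derives a string outside it.
s = "aaaabbbb"
print(generates(s), s in target)   # True False -> overgeneration

# Without an independently defined `target` (Dennis's point about natural
# language), the second test cannot even be stated.
```

The claim, then, is that for natural language there is no analogue of `target` to hold the grammar's output against, so the charge of "overgeneration" (and its mirror image, crash-proofness) has no technical content.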
Anonymous (2014-06-11, 14:51):

Dennis,

To Omer's statement that "The Bare Output Conditions model requires (potentially massive) overgeneration of possible derivations, followed by filtration of those derivations whose outcomes do not meet the relevant conditions", you responded that "If you follow Chomsky and drop the idea that there is a significant notion of 'well-formed formula' for natural language, then the term 'overgeneration' has no real meaning" and that "'overgeneration' and conversely 'crash-proof (grammar)' all become pretty much meaningless notions". I wonder if you could elaborate a bit on this. I'm not sure I understand how "overgeneration" becomes meaningless.

Brandon

Anonymous (2014-06-11, 14:33):

@Dennis: "Yes, but that's different from interface conditions imposed by biological systems embedding the I-language."

My point is a technical one: let's assume, as you do, that there's a set C of constraints that hold at the interfaces. As long as these constraints fall within a certain complexity class, they can be automatically translated into syntactic constraints over derivations, which in turn can be encoded in terms of feature dependencies.

This class of feature-reducible constraints includes all the examples you give above. The only constraints in the literature that fall outside are those that invoke identity of meaning, but even there it's not clear-cut (Scope Economy, for instance, is fine if we do not care about actual change of meaning but just about whether the meaning of the sentence could in principle be altered by QR, which is what Fox actually argues for). So overall, features and constraints can do the same work, but they do it in different ways, and thus one of the two might be more suitable for certain tasks.

At any rate, Norbert already provided the money quote: "the problem is not whether to code what we know in terms of features or in terms of conditions, but an admission that we don't know enough to make use of these notions particularly enlightening." I find that admission very refreshing, but I have the impression that a fair share of syntactic work nonetheless wrestles with such matters of notation with the agenda to prove one superior.
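[The constraint-to-feature translation alluded to here can be illustrated on a deliberately tiny case; the actual results concern far richer constraint classes than this invented one. A declarative condition over finished derivations, "every wh step is eventually followed by a C step", can be recoded as a finite feature bundle threaded through the derivation, with convergence just in case no feature is left unchecked.]

```python
def satisfies_constraint(d):
    """Declarative, 'interface-style' statement over the whole derivation:
    every 'wh' step has some later 'C' step."""
    return all(any(later == "C" for later in d[i + 1:])
               for i, step in enumerate(d) if step == "wh")

def satisfies_features(d):
    """The same requirement recoded as feature checking: 'wh' introduces
    an unchecked [uWh], 'C' checks it off; converge iff none remains."""
    state = set()
    for step in d:
        if step == "wh":
            state.add("uWh")
        elif step == "C":
            state.discard("uWh")
    return "uWh" not in state

for d in (["merge", "wh", "merge", "C"], ["wh", "merge"], ["wh", "C", "wh"]):
    assert satisfies_constraint(d) == satisfies_features(d)
print("constraint-based and feature-based encodings agree")
```

The finitely many feature states here play the role of automaton states, which is why the two formats are interchangeable for constraints of bounded complexity: the choice between them is notation, not substance.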
Norbert (2014-06-10, 13:20):

@Dennis:
B's theory requires movement until features of wh are discharged. Then it can move no more. Chomsky's theory is that wh moves until criterial agreement occurs, and then there is no more movement. This seems to be the very same idea, one feature-based, one BOC-based. The feature story is unmotivated. But so far as I can tell, so is the BOC account, as it relies on a very strong (and unmotivated) assumption about clause typing being required for CI interpretation. Moreover, the freezing of the wh after criteria have been checked is based on considerations having to do with interpretability that I frankly do not understand. So, where the two theories make claims, they seem to be more or less the same claim. Both theories, of course, have problems coping with all the variation we find in wh movement in cases of multiple interrogation. The variation is very hard to model given either assumption.

So, is one better than the other? Right now, they are pretty interchangeable. This said, I agree that WERE one able to find defensible, non-trivial BOCs, that would be very nice. But then, were one able to find non-trivial, defensible features, that would be too. At the level of abstraction we are discussing things, I think the biggest problem with both views is how little the assumptions made have any independent plausibility.

Dennis O. (2014-06-10, 11:37):

"Indeed, I don't see how unmotivated interface conditions are any better or worse (or different) than unmotivated features. That's my problem."

No disagreement here. My feeling, though, is that stories based on featural triggers for Merge are rarely insightful, and I find it hard to see how they could be. (I'm not talking about theories of agreement or other operations where features are uncontroversially involved; I'm referring only to "triggered Merge" models that take operations like structure-building, deletion etc. to be contingent on formal features.) By contrast, the little bit of work there is that explores alternative notions of "trigger" (in terms of interface effects) looks much more promising to me, although naturally lots of problems remain.

But I think the choice between "triggered Merge" theories and interface-based theories is not just a matter of taste. At least implicitly, "triggered Merge" approaches often rest on the assumption that the goal of the theory is to model a "crash-proof" system, an idea that I believe rests on the mistaken equation of grammaticality and "acceptability." In interface-based theories, considerations of acceptability take a back seat, shifting the focus of investigation to the question of what consequences syntactic operations have when it comes to interpretation and externalization of the resulting structure. So while I agree with you that either approach must be evaluated a posteriori based on its merits, I think that the two approaches differ, a priori, in terms of what they take the theory to be a theory of. (As always, the truth may well lie somewhere in between. There's an interesting paper by Biberauer & Richards that proposes a model in which obligatory operations are triggered by features whereas others apply freely, the latter licensed indirectly by their effect at the interfaces.)

As for successive-cyclic movement, I was merely referring to the general idea that you're always free to move to the edge, but if you don't, you're stuck; what Chomsky adds is that you can't stay in a non-final edge, since that will mean that the higher predicate's complement is an unidentifiable {XP,YP} structure. Requiring labels for purposes of ensuring locality of selection is the one place where they make some sense to me, intuitively at least. I'm curious, what "virtues and vices of Boskovic's story" do you have in mind?

Norbert (2014-06-10, 11:15):

I replied to this above. Filter here is used very non-theoretically; it's whatever it is that explains our data without doing so on the generation side of the grammar. BTW, I've never really understood what the interpretive options are, given that Chomsky has been loath to specify what the interpretive system does. But this may not be a fair criticism, as nobody has a good idea about this. However, if I want to say that something converges with a gibberish interpretation rather than not being interpretable at all, it would be nice to have some canonical examples of how this distinction is meant to be taken.

Last point: how do you understand the idea that the grammar is the optimal realization of interface conditions if the latter does not in some sense restrict the range of the former?

Norbert (2014-06-10, 11:10):

@Dennis:
Hmm. If we are talking about bets here, we might be putting our money in different places. I personally think that we will get a lot more from considering how the computational system works than from thinking about how the interfaces interpret. I am also not particularly impressed with Chomsky's treatment of binding or theta theory. The latter, to the degree it says anything about CI, amounts to a restatement of the principle of full interpretation, which, as I recall, you're not impressed with. As for binding theory, virtually none of its properties follows from anything Chomsky has said. Why the locality and hierarchy conditions? He says that this is natural, but come on, really? Nowadays he ties locality and hierarchy to probe-goal dependencies: antecedents relate to heads they agree with, and these heads in turn probe anaphors. But this doesn't explain anything. It just restates the facts. I could go on, but the real point is that we need concrete examples that do real work before deciding on how reasonable this approach is. I'm waiting for affix hopping! Till I get this, I'll stick to my view that the idea is coherent but so underspecified as to be currently little more than a poetic hint of something that MIGHT be worthwhile if anyone figures out how to make it concrete.

Norbert (2014-06-10, 11:00):

I am not saying it discredits it. I am saying that there is no real approach there yet. There will be one when we know something about CI and what demands it makes on derivations.

BTW, I use 'filter' in a loose sense. It explains some data we are interested in without doing so by restricting generation.

Last point, just to touch base on Chomsky's current view: my problem with it is that it makes an ad hoc assumption about labels/features and CI requirements. I do not see why agreement is required for interface reasons. Do we really need to know that a sentence is a question in addition to knowing that a certain operator is WH? What of agreement? Is this required for interpretation? Moreover, this approach to successive cyclicity, at least to me, has all the virtues and vices of Boskovic's story. So until I hear of some independent evidence for Chomsky's interface condition that forces agreement on pain of CI uninterpretability (if that's the real problem; I'm never sure here, as Chomsky doesn't say what goes awry if the agreement fails), I am not going to be more impressed with this sort of story than one that just keeps adopting new features. Indeed, I don't see how unmotivated interface conditions are any better or worse (or different) than unmotivated features. That's my problem. It's not conceptual at all.
Omer (2014-06-10, 10:55):

@Dennis: Yes, you've zeroed in on it. I indeed reject Chomsky's views on this matter (e.g. what you quoted/paraphrased from the Atlantic). This used to be motivated, for me, by the usual scientific-method reasons (i.e., if real-time systems and the "competence grammar" actually do share some (or all) of their subsystems, you'd never discover it if you started out with the assumption that they're separate, e.g., "it's incoherent to impute to [the grammar] a procedural/algorithmic interpretation").

But now, as I said before, there is positive evidence accruing that the real-time procedures respect islands, c-command, etc. (ask some of the other UMD folks commenting on this blog; they know far more about it than I do). And if Chomsky's ontological choices make this comparison necessarily *informal*, then in my view, that's simply another strike against those ontological choices.

Lastly, I think the notion of "a system of competence (accessed by systems of use but distinct from them)" is incoherent if you think the *systems of use* can operate in real time but the *system of competence* cannot (if the operation of SoU involves a step where SoC is accessed, and SoU operates in real time, then at least that part of SoC that is accessed by SoU must also be able to operate in real time).

Dennis O. (2014-06-10, 09:30):

Sorry, Norbert, I missed your earlier comment. You wrote, "Last point: I agree that your version of the SMT is a prevalent one. And that's my problem with it. It's not a thesis at all, not even an inchoate one, until one specifies what the CI interface is (what's in it, e.g. how many cognitive modules does FL interact with?), what properties it has (what properties do these modules have?), and how grammatical objects get mapped to these. Only when this is done do we have a thesis rather than a feel-good slogan. And that's why I like other versions of the SMT more: they provide broad programs for investigation, ones that I can imagine being fruitfully pursued. They may be wrong, in the end, but they point in clearish research directions, unlike many of the versions of the SMT that I am familiar with. So a question: why are parsers, visual system, learners NOT considered part of the interfaces that FL interacts with? Why should we not consider how FL fits with these? Or, put another way: what interface modules 'count' as relevant to the SMT and what not?"

You may interpret this as dodging the question, but here I'm with Chomsky: we have to find out what the interfacing systems are and what constraints they impose as we proceed. But it's not like we have no idea what their effects are: we have things like Binding Theory, Theta Theory, etc., after all. And I'm with you that we should take the basic generalizations on which these theories rest seriously and to be too good to be entirely false (although our optimism is orthogonal to the issue). The task, as I see it, is to refine and restate, in a more principled fashion, these putative "modules of grammar" in terms of interface requirements, kind of like what Chomsky & Lasnik tried to do for Binding Theory in their 1993 paper. How is this a less clear or coherent research guideline than the more traditional one that seeks to capture the complexity in terms of syntax-internal constraints, as you imply?

Incidentally, I don't think this is the prevalent interpretation of the SMT at all, at least not in practice. There's very little work, as far as I'm aware at least, that actually tries to do this.

Dennis O. (2014-06-10, 09:15):

Omer, you wrote: "So here is a choice point: either there is a second system, G', that is usable in realtime and mimics (to a non-trivial degree) the effects of the grammar G; or the two are one and the same. One option is to maintain that G and G' are distinct, in which case the burden is to explain why their effects are so darn similar. The other option is to accept that G can be used in realtime, in which case the price is that we can no longer use the refrain you invoked to free ourselves entirely of considerations of realtime computation. I choose option two."

If option two just means acknowledging that when engaging in (linguistic) behavior we're putting to use our (linguistic) knowledge/competence, then I agree; the two are trivially related in this sense. But the systems must be fundamentally different, as a matter of logic, if we want to maintain that the grammar is a mental representation of the speaker's linguistic knowledge, whereas other processes operate in real time to algorithmically assign meanings to sounds based on what the grammar identifies as licit sound-meaning pairs. I'm not sure what you mean when you say that production/comprehension "mimics (to a non-trivial degree) the effects of the grammar" or that "their effects are so darn similar"; the crucial point to me is that no such comparison can be more than informal, given that "operations" in the grammar have no procedural dimension (just like steps in a proof; I think Chomsky has used this analogy), whereas production/comprehension systems are necessarily procedural. So if option two means likening real-time processes to purely logical operations (steps in a derivation or whatever), then I don't see how this could possibly be stated coherently, without conflating logically distinct dimensions. And consequently, there's no burden attached to distinguishing the systems, since it's a matter of necessity.

The Atlantic had an interesting interview with Chomsky a while ago (it's online), where at some point he says that I-language "has no algorithm"; it's just abstract computation, so it's incoherent to impute to it a procedural/algorithmic interpretation. Interestingly, a recent paper by Sprouse & Lau ("Syntax and the Brain") explicitly denies this right at the outset, stating that the processor is simply I-language at the algorithmic level. So Sprouse & Lau in effect view I-language as an input-output system, whereas Chomsky takes it to be a system of competence (accessed by systems of use but distinct from them), and this seems to be just our disagreement here.
Dennis O. (2014-06-10, 08:48):

You say, "Here's where I think we might differ: to date I see almost nothing that tells us anything about BOCs. You note the principle of full interpretation is unprincipled. Well, can you think of any useful BOC at all if even that one is gone? (You mention Fox and Reinhart: but if look-ahead bothers you (and it does me), then comparing derivations wrt interpretations is anything but efficient.) We know so little about features of the CI interface that invoking it as an explanatory construct is just as bad (maybe worse) than a general reach to features."

I don't think this is a fair criticism: there is indeed very little work trying to pin down the precise nature of interface conditions, but I think the reason for this is precisely that most people in the field have been, and continue to be, too content with rather shallow "explanations" in terms of unmotivated features (sometimes supplemented with a caveat that the features are really "shorthand" for something else, but this just begs the question). Speaking from personal experience, I have been urged more than once now by reviewers to patch up some open issue in an analysis with an invented, arbitrary feature rather than simply leaving it open. The descriptive urge is strong, and I think it also drives the whole cartographic movement, which continues to be much more popular than the alternative approaches I adumbrated.

I agree with you that the Fox-Reinhart take on QR has issues concerning look-ahead (if I remember correctly, Fox explicitly admits and acknowledges this). But this is not necessarily true of interface-based explanations per se. Take for instance the Moro-Chomsky approach to {XP,YP} structures requiring movement to be linearizable/labelable; this requires no look-ahead, but it does mean that there will be failing derivations (those in which no symmetry-breaking movement applies, at whatever level this is detected). I don't know how plausible this is overall, but even Chomsky's sketch of an explanation of successive-cyclic movement in these terms strikes me as more principled than any account I've seen that relies on featural triggers for intermediate movement steps. There are also theories of scrambling that don't assume "scrambling features" and the like, but instead free movements whose information-structural effects are determined by mapping rules (Neeleman & van de Koot). I understand that in Tromso-style nanosyntax, certain movements serve to derive lexicalizable subtrees; but those movements are necessarily blind to such needs arising "later on," so they end up being obligatory but crucially without any look-ahead (Chomsky has a really good discussion of why optional operations should not be misunderstood as applying teleologically in "Derivation by Phase"). All you need to accept is that there are failing derivations, which -- it seems to me -- is entirely unproblematic, once the misunderstandings concerning competence/performance and acceptability/grammaticality that we've addressed here are cleared up.

So I agree that we know little about interface conditions, but I fail to see how that discredits the approach. I also don't see what "basic results" have emerged from the feature-driven framework, unless by that you mean (undoubtedly important) empirical generalizations. I have yet to see a case where that framework provides a genuine explanation for a real problem, rather than just a puzzle disguised as an answer.

Dennis O. (2014-06-10, 08:48):

Norbert: sorry for the delay, and also sorry for mistakenly placing you in the featural-triggers camp; at least in the 2005 textbook you seemed to me to be subscribing to this general worldview, but perhaps this was in part for pedagogical reasons and/or doesn't reflect your current views.

You say, "There is lots of probing and goaling and none of this makes sense, at least to me, in the absence of features. From what I can gather, Chomsky still likes this way of putting things, which maybe is what you mean by 'ambivalent.'"

That is indeed what I meant, but I think in Chomsky's case the main reason why there's still lots of probing and goaling is that he sticks to classical Case Theory. Drop that, and the role of features is diminished significantly. And note his use of features in the lectures you posted: they determine to some extent where some element can or cannot end up, but they don't really trigger anything. His discussion of the "halting problem" ("*Which dog do you wonder John likes t?") was most explicit on this: instead of adopting Rizzi's solution, he argues that you can permit "illicit" movement in such a case, since it won't yield a coherent interpretation. The features are implicated in that they're relevant to labeling, but they don't trigger or block anything. I'm not saying that's the correct or ultimate explanation, but the general spirit seems to me to point in the right direction. And note that if something like this is correct, you *want* the syntax to be able to carry out the (eventually) illicit operation, for otherwise you'd be replicating in the syntax what's independently done at the interface.

Dennis O. (2014-06-10, 07:38):

Thomas: "Dennis said that in [Formal Language Theory], there are no interface systems/conditions. Well, there are. There's an entire subfield of formal language theory that's concerned with the generative capacity of logically defined constraints."

Yes, but that's different from interface conditions imposed by biological systems embedding the I-language. There is no reason to assume, as far as I can see, that the latter correspond to logical constraints on formal systems. I remember Chomsky talking about vacuous quantification, which seems to me to be a good illustration of this. It's really hard to assign an interpretation to something like "Who did John kiss Mary?", which in logical/formal terms is unproblematic. But C-I (or whatever you want to call it) doesn't permit it. Or take thematic roles, imposed on interpretation by C-I in a way that has little to do with logical constraints, but presumably with the format of "events" defined by C-I. And so on.

Norbert: "movement is 'free' and outputs are 'acceptable' if they gain interpretation at the CI interface."

I think this is a misleading way of putting it. Rather, expressions have whatever interpretation they have at the interfaces, including deviant and nonsensical interpretations, or perhaps no coherent interpretation in the extreme. This is conceptually quite different from a "filtering" system implementing some notion of "acceptability" in terms of "reject" or "admit."

Norbert (2014-06-09, 09:21):

@Thomas:
The issue as I see it is not features versus constraints but which features and which constraints; implement as you will. The original MP conceit was that movement was forced, driven by feature checking (as in MP formalizations). However, there is another idea: movement is "free" and outputs are "acceptable" if they gain interpretation at the CI interface. Doing this means "fitting" with the system of meanings that live at CI. There is a problem with these views: re features, they are too cheap and hence lack explanatory power. Re Bare Output Conditions (CI conditions), we know virtually nothing about them. This also reduces their explanatory efficacy.

So, the problem is not whether to code what we know in terms of features or in terms of conditions, but an admission that we don't know enough to make use of these notions particularly enlightening. And for that there is no formal fix, so far as I can tell.
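[The architectural contrast running through this thread, feature-triggered operations versus free generation with the outputs sorted at CI, can be caricatured in a few lines. This is a deliberately crude toy: the lexicon and the "CI condition" are pure inventions, and, as Dennis notes above, the interface reading need not be a reject/admit filter; the condition below merely stands in for "receives a coherent interpretation."]

```python
from itertools import product

LEXICON = ["the", "dog", "barks"]

def free_merge(items, rounds=2):
    """Free Merge: blindly pair any two available objects, repeatedly.
    No features, no triggers; generation itself is unconstrained."""
    objects = set(items)
    for _ in range(rounds):
        objects |= {(a, b) for a, b in product(objects, repeat=2)}
    return objects

def ci_interpretable(obj):
    """Invented stand-in for a CI condition: the object must contain both
    a nominal and a verbal element somewhere in its structure."""
    leaves, stack = [], [obj]
    while stack:
        x = stack.pop()
        if isinstance(x, tuple):
            stack.extend(x)
        else:
            leaves.append(x)
    return "dog" in leaves and "barks" in leaves

generated = free_merge(LEXICON)
usable = [o for o in generated if ci_interpretable(o)]
print(f"{len(generated)} objects generated, {len(usable)} interpretable at CI")
```

On the feature-driven alternative, the work done here by `ci_interpretable` would instead be built into the generation step itself; Norbert's point stands either way, since at present neither the trigger features nor the CI condition has independent motivation.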