Comments on Faculty of Language: Derived objects and derivation trees

@Omer: Yes, it captures pretty much the same intuition. Here are several ways of paraphrasing it:

1) Traversing downwards from the Move node towards the mover, you may never cross through the slice of a potential mover.
2) If incremental search along the complement spine leads you towards a potential mover, the derivation is considered ill-formed because the actual mover is further away than it could have been.
3) Among all potential movers for a Move node, the derivationally most prominent one has to be the actual mover.

— Anonymous (2016-02-23 17:35)

@Thomas: good, so I see there's no problem per se in stating minimality over derivation trees. So we can reject the (faulty) premise that no two goals ever simultaneously bear the features sought by the probe, and we still have a way of getting minimality effects. This is encouraging.

So now we come to the question of naturalness. One of the nice properties of minimality when stated over PMs is that it can be understood as a side effect of iterative search. (The search in question is neither breadth-first nor depth-first, but rather something like searching along the spine of the tree. Each iteration consists in asking "does the spec match my needs? if not, does the complement match my needs?"
and if the answer is still no, repeating this step with what-was-the-complement as the new domain of evaluation.)

The question is whether the following (quoted from your comment above) has a similarly natural interpretation:

<i>- z and y are distinct, and
- x properly dominates z, and
- the slice root of z properly dominates y, and
- z is a potential g-mover</i>

— Omer (2016-02-23 12:31)

There is no way to do without a specification of the current derivational state. Everyone has some object that serves that purpose. The question of whether PMs are useful will only make sense if we specify what a PM is. (Otherwise we could just use the label PM for the whatever-it-is that encodes current derivational state.)

At least implicitly, a PM is often assumed to be something that has words in it (for example, as the leaf nodes in a tree). Let's put the stake in the ground there. If we do that, then I think it's clear that PMs do contain redundant information, since all the applicability of derivational operations cares about is the categories of these words, not the identity of the words themselves (their pronunciation, etc.); that's virtually the definition of what a "category" is.

I think the inclination to assume that PMs have complete words in them stems from the idea that the PM one arrives at, at the end of a derivation, serves as the basis for interpretation at the interfaces --- in addition to serving as the basis for determining which derivational operations are applicable. If you assume this, then your PMs have to do more than encode current derivational state, and yes, you will obviously need the full PF/LF information about each word.
So the question of "how much info you need to carry forward" depends on whether you're carrying it forward for operation-applicability purposes only, or for that plus interface interpretation purposes. When we think in derivation tree terms, however, I think it is almost always assumed that the derivation itself is interpreted, rather than the final derived object. In other words, it's a system that has the same shape as something like CCG: the thing that encodes derivational state is just the categories like NP and S\NP, and the pronunciation and meaning of the larger constructed object is composed step by step in sync with the derivational operations themselves, not by stepping through some tree-shaped derived object.

— Tim Hunter (2016-02-23 09:39)

Bugfix: the derivational definition of the AIC above is missing one clause.

<i>- x properly dominates the slice root of l.</i>

— Anonymous (2016-02-23 09:34)

Let's look at Relativized Minimality as another example. For the sake of simplicity, let us reduce the condition to the following: x may not move across y to the specifier of z if y could move to z. This is partially handled by the Shortest Move Constraint in MGs, as you cannot have both x and y carry the feature that would allow them to move to z. But MGs do allow x to move if y has no such feature, and the other way round. So here's how you patch this loophole in two steps:

<i>Let l be a lexical item with phonetic exponent pe and feature string f_1 ... f_n.
Then l is a potential g-mover iff there is some lexical item l' with phonetic exponent pe and feature string f_1 ... f_i g f_{i+1} ... f_n.</i>

That's just a very roundabout way of saying that l could in principle carry the movement feature g. And now we enforce the simplified version of Relativized Minimality introduced above:

<i>If x is a g-occurrence of y, then there is no z such that

- z and y are distinct, and
- x properly dominates z, and
- the slice root of z properly dominates y, and
- z is a potential g-mover</i>

Not particularly shocking either. You might say that the switch between dominance and dominance by slice root is inelegant, but it can all be boiled down to the concept of derivational prominence. And of course all of this would be more intuitive if we looked just at the trees instead of mucking around with definitions; but since succinctness was an issue, I figured a more technical approach would provide better evidence that there is no real difference in that respect.

— Anonymous (2016-02-23 09:31)

@Norbert: Alright, let me answer the call. Just a quick aside first: I didn't say there's hostility, just hesitation, which is surprising because at first glance derivation trees and multidominance trees almost seem like notational variants. The latter are fairly run-of-the-mill, so one would expect the former to be met with "yeah sure, why not" rather than "I don't know, this feels strange". But since that's not really the case with you anyway, let's move on.

My hunch is that your query is best answered by a concrete example. Let's take the Adjunct Island Constraint: no phrase may move out of an adjunct.
That's a little vague, so here's a more rigorous, GB-style version:

<i>For all nodes x and y, if x and y are members of a movement chain, then there is no node z such that

- z is the maximal projection of a head h, and
- the phrase projected by h is an adjunct, and
- x c-commands z, and
- z properly dominates y, and
- y is not a projection of h.</i>

I do not define movement chain here, because that's a mess over PMs if you have remnant movement (see e.g. the Collins and Stabler paper).

Or one more in the vein of Minimalism:

<i>Let P be some phrase marker containing a potential mover M. Then Move(P,M) is defined only if M is not properly contained in an adjunct.</i>

Once again I leave out a definition here, this time it's proper containment. But it's pretty similar to clauses 1, 2, 4, and 5 above, so the two definitions differ little in complexity once you pin down all the terms.

Now let's look at the derivational equivalent. First, two terminological remarks: a Move node x is an occurrence of y iff x checks a feature of y, and the slice root of a lexical item l is the highest node in the derivation that is licensed by some feature of l (this is the derivational equivalent of the highest node projected by l in the PM).

<i>For all nodes x and y such that x is an occurrence of y, there is no lexical item l such that

- l is an adjunct, and
- l and y are distinct, and
- the slice root of l properly dominates y.</i>

For a full definition we would also have to define adjunct in all three definitions. But the bottom line is that I do not see much of a difference between any of these definitions.
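For concreteness, the derivational version can be checked mechanically (here including the extra clause supplied in the bugfix comment elsewhere in this thread: x properly dominates the slice root of l). The flat parent map, the precomputed slice roots, and all node names below are assumptions made purely for illustration, not anything proposed in the discussion:

```python
def violates_aic(occurrences, adjuncts, slice_root, parent):
    """Derivational Adjunct Island Constraint: for every occurrence pair
    (x, y) there must be no adjunct lexical item l such that l and y are
    distinct, x properly dominates the slice root of l, and the slice
    root of l properly dominates y. `parent` maps each node to its
    mother; `slice_root` maps each lexical item to the root of its slice.
    """
    def properly_dominates(a, b):
        n = parent.get(b)          # walk upwards from b towards the root
        while n is not None:
            if n == a:
                return True
            n = parent.get(n)
        return False

    return any(
        l != y
        and properly_dominates(x, slice_root[l])
        and properly_dominates(slice_root[l], y)
        for x, y in occurrences
        for l in adjuncts
    )
```

For example, with parent = {'slice_l': 'x', 'y': 'slice_l'}, occurrences = [('x', 'y')], adjuncts = {'l'} and slice_root = {'l': 'slice_l'}, the mover y sits inside the adjunct's slice and the check reports a violation; with no adjuncts it does not.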
Once you have to clearly spell out your terms, they all boil down to checking that movement paths are not allowed to cross adjunct borders.

And this is the story for pretty much all constraints over PMs that I am aware of. They aren't any harder to state over derivation trees because MG derivation trees --- in contrast to, say, derivation trees in Tree Adjoining Grammar --- are structurally very similar to PMs despite removing some of the structural fat, as Tim points out.

— Anonymous (2016-02-23 09:28)

Ok, we are on the same page here. What we want to know is what OMish information is required and how to make it available. So PMs are succinct ways of coding the "current derivational state" (thx for the terminology). And yes, full PMs are too rich, hence the phase/subjacency concerns. But, if I read your point correctly, we agree that PMs DO add something useful, a specification of the current derivational state. Hence PMs help determine the "next" step that can be taken to expand the derivation tree. If this is so, they are very useful objects, or at least the info contained therein is. The question remains whether the info they carry is redundant, or is easily recoded in derivation tree terms. Thx, and whatever you are busy with, good luck.

— Norbert (2016-02-23 07:30)

@Thomas: First off, not a bad summary. Second, I don't think that there is any hostility to viewing things in terms of derivation trees. At any rate, I am not averse.
What I want to get clear about is whether the things that we exploited PMs for can be easily transferred to derivation trees. If so, then great, let's use them as the basics. This won't even change practice much, for if PMs are redundant, as they are for the mapping, then they will just be useful ways of thinking of things for those who think pictorially. Again, like Feynman diagrams. So, there is no general hostility, just curiosity about the details.

So, back to the main point: why do GGers like PMs? Because we have observed that the rules that Gs exploit are structure dependent, and PMs are a way of making the relevant structure explicit. So minimality, islands, binding domains, headedness, c-command, etc. are all easy to code in PMs. Given that we think that these are key notions, we want something that will allow for their easy coding. My query was to provoke those who knew to instruct us all about how we might do this without PMs. I noted that this seemed like a legit question given that the only interesting condition I have seen coded (though I admit to not being expert in any of this, hence the post), viz. Minimality, does this by sidestepping the problem rather than coding the condition. Minimality does not arise in Stabler MGs, as you know. Why not? I have no idea. Would it be easy to code? I have no idea. How would it compare to a PM coding? I have no idea. But I do know that unless it is easy to code, and does so in a natural way, this is a problem for a view that entirely dumps PMs. So, this was all by way of a question, with some indications of why I take the question to be interesting.

You note that succinctness and naturalness are all in the eyes of the definers. True. But some definitions seem nicer than others. I know that raising such issues is useful because Stabler did it for "movement" (as did Chomsky before him).
He argued that its virtues lie NOT in the mapping to s but in the nature of the Gs that do the mapping. G size might matter. Similarly, how far back one need look in a derivation tree might matter. Of course, it might not. But isn't considering things like this the business we are in?

So, do I have hostility to derivation trees as the fundamental objects? No way. Do I understand how we can code all that we want entirely in these terms? Nope. That's where you come in. Always call a local expert when you run into problems. Thomas, I am calling.

— Norbert (2016-02-23 07:26)

Norbert wrote: <i>this does not yet imply, I don't think, that PMs are not important G-like objects. At the very least they describe the kinds of information that we need to use to specify the class of licit derivation trees. Thus, we need an account of how information is brought forward in derivational time in derivation trees and, more importantly, what is not. Derived objects seem very useful in coding the conditions on G-licit derivation tree continuations. And as these are the very heart of modern GG theory (I would say the pride of what GG has discovered) we want to know how these are coded with PMs.</i>

Unfortunately I most likely won't have much time this week to participate further in this discussion, but just briefly: As far as I can tell, no one denies that some information has to be carried forward about the current "derivational state" in order to define which transformations (e.g. merge/move steps) are applicable at any particular point. And no one denies that full PMs of the traditional sort are sufficient to carry forward this information.
To my mind, the question is whether all of the information that full traditional PMs encode is necessary. Almost certainly it is not all necessary, for the same reasons that various people have intuitions about something like phases: hand-waving a bit, only a certain amount of relatively recent history is relevant.

— Tim Hunter (2016-02-23 06:49)

Already downloaded it, hopefully will have time to read it tomorrow.

— Omer (2016-02-22 19:30)

@Omer: Such are the perils of boiling down a paper to one catchy slogan. A better one would have focused on how the paper reconciles the fact that idioms are syntactically fine-grained but semantically atomic. Or how it ties this to psycholinguistics. Or how the existence of idioms is completely unremarkable under a derivational perspective. Just look at Greg's paper and you'll see that it touches on a lot of things and bundles it all together in an intuitive fashion through derivation trees.

— Anonymous (2016-02-22 19:28)

@Thomas: I think you are factually wrong about that. Idioms as the semantic analogue of suppletion, for example, is an idea that can be found in some of the earliest writing in Distributed Morphology. And so it was proposed independently of (and I think chronologically prior to?)
any talk of derivation trees in minimalism — see, e.g., Marantz 1997 (PLC proceedings), 2001 ("Words", unpublished ms.).

— Omer (2016-02-22 18:29)

@Greg: That wouldn't work because, if the DAT is subsequently clitic-doubled, then the features of the ABS become visible again should a yet-higher phi probe search for them. (That's in fact how you get ABS agreement in monoclausal ditransitives; but when the clause is non-finite — and hence lacks clitic doubling — a higher probe is able to target the embedded ABS only in the absence of a DAT.)

— Omer (2016-02-22 18:17)

Two more examples that I'm pretty fond of that haven't been discussed on this blog yet: Greg's <a href="http://home.uchicago.edu/~gkobele/files/Kobele14LFCopy.pdf" rel="nofollow">unification of syntactic and semantic theories of ellipsis</a>, and his treatment of <a href="http://home.uchicago.edu/~gkobele/files/Kobele12Idioms.pdf" rel="nofollow">idioms as the semantic analogue of suppletion</a>. There's a lot more, such as Ed Stabler's top-down parser and the work by Greg, John Hale, Tim Hunter, and me and my students that builds on it, Tim's approach to adjunction, and so on.
All of these ideas are strongly informed by the derivational perspective, to such an extent that they probably wouldn't have been proposed if we were still thinking in terms of derived trees.

— Anonymous (2016-02-22 14:22)

It seems to me that this post talks about several issues at once without clearly distinguishing them.

1) Can we generate the same sound-meaning mappings without using PMs?
2) Are PMs Markovian in a sense that derivation trees are not?
3) Do PMs allow for more succinct descriptions of certain well-formedness conditions compared to derivation trees?
4) Are there structural conditions that are natural over PMs but unnatural over derivation trees? If so, are any of these conditions similar to what we observe in natural languages?

Let's take them one by one:

1) <b>Expressivity</b>: Yes, as you point out yourself.

2) <b>Markovian</b>: Depends. If I read correctly between the lines, your worry is that we can easily put conditions on derivation trees that one might consider instances of look-ahead, whereas this is less natural from a PM perspective. But the devil is in the details here. I can take look-ahead constraints and compile them into the grammar so that they are enforced without look-ahead, so PMs do not protect you from look-ahead. At the same time, it is very simple to restrict the constraints on derivation trees to make them Markovian in your sense. In fact, this is already the case for standard MGs.

3) <b>Succinctness</b>: Depends on your definition of succinctness. The situation is not at all like MGs vs. MCFGs, where you have an exponential blow-up in grammar size. Over derivation trees, you can still reference every position that a phrase moves to.
If you take some constraint like the Proper Binding Condition and look at its implementation over derived trees and derivation trees, the latter won't look much more complicated, though admittedly some additional complexity needs to be added to the derivational definition of c-command if you want it to be exactly identical to c-command in derived trees. But i) it is unclear that you need perfect identity, and ii) the difference in description length is still marginal.

4) <b>Naturalness</b>: Depends on your definition of naturalness. Any effect brought about by movement could be considered unnatural over derivation trees because instead of the moving phrase you only have a Move node in the derivation. But this is only unnatural if you ignore the fact that Move nodes establish very close links to their movers --- links that syntacticians often think of as multidominance branches.

The last point is also why I'm pretty puzzled whenever syntacticians are reluctant to switch to derivation trees. The switch is very painless; it's pretty much the same as working with multidominance trees (yes, we usually drop the movement branches because they do not add any information for MGs, but that doesn't matter for the things syntacticians care about in their work). Yet despite being a superficially minor switch that is easy to adapt to, it opens up a multitude of new perspectives, which you can tell from the discussions we've had on this blog before. Recent examples include parallels between phonology and syntax, the status of successive cyclicity, hidden implications of sideward movement, and the connection between features and constraints.

— Anonymous (2016-02-22 14:11)

Both of you are correct.
With a free choice of states, locality restrictions are pointless since all non-local information is locally passed around by states --- the whole subregular hierarchy gets compressed into a single point. But transductions can indeed be computed with systems where the choice of states is much more restricted, as is the case with, say, the input strictly local functions. And then locality does matter because, intuitively, you can only refer to what you actually see in some subpart of the structure.

None of these string transductions have been lifted to trees yet; afaik there isn't even a tree analogue of the subsequential transductions, which are much better known than their input/output strictly local subclasses. But it is very easy to see that movement cannot be output strictly local unless you have intermediate landing sites. That by itself will still not be enough, because there is no upper bound on the distance between phase edges due to adjunction. But that's where the tiers come in, as Greg correctly surmised: project a tier that does not contain any nodes related to adjunction, and you (probably) get an upper bound on the distance between landing sites.

— Anonymous (2016-02-22 13:15)

@Greg: I was assuming that there were no states in that sense ("the next output is determined <b>solely</b> by looking back at some finite portion of the preceding input and output").

— Alex Drummond (2016-02-22 12:04)

@Alex: I meant that since you have a finite number of states, you don't gain anything by temporarily
writing down a finite amount of information, as you can just encode that in the state.

— Greg Kobele (2016-02-22 12:00)

@Omer: I would first qualify the quote to read that "Stablerian MGs often assume a particular constraint on movement (called the SMC) to the effect that ..." This is because Stablerian MGs are a general framework in which all of minimalism as she is practiced can be straightforwardly implemented.

A closer-to-home example is multiple wh-questions, and some proposals (Frey, Gaertner, and Michaelis) follow Grewendorf in this regard.

Wrt Basque, a (perhaps too?) simple approach would be to block off/erase/delete/make invisible the ABS's phi features once the DAT enters the derivation.

— Greg Kobele (2016-02-22 11:57)

"[...] Stabler-like MGs (minimalist grammars) deal with minimality effects by effectively supposing that they never arise (it is never the case in Stabler MGs that a licit derivation allows two objects to have the same accessible checkable features)."

I am far too ignorant of the details of Stablerian MGs to know if this statement is true — but if it is, then I don't see how these grammars can be understood as models of natural language.

Consider agreement intervention in Basque. It is easy to show that ABS nominals in Basque do not require licensing by a functional head. Additionally, non-clitic-doubled DAT nominals intervene in agreement relations targeting the ABS.
And, crucially, neither the ABS nor the DAT has any "accessible checkable features" in that configuration, in that neither requires licensing-by-checking by the time the phi probe enters the derivation.

The only way I can make sense of the above statement, then, is if we assume that ABS nominals have a feature that requires checking-by-agreement in all derivations except in those licit derivations where a DAT intervenes (and there are such derivations; you get 3sg agreement even with a non-3sg ABS argument). But at this point, haven't you just introduced a feature that amounts to [+/- minimality is in play]? Couldn't you just as easily add a [+ not in an island] feature to every wh-phrase that happens not to be in an island? I guess what I'm saying is that if that's the treatment of minimality, then you've pretty much admitted minimality is real and your system can't handle it.

But like I said above, it is entirely (entirely!) possible that I have not fully understood what's at stake.

— Omer (2016-02-22 11:23)

@Greg: I'm not sure what you mean when you say that this won't solve anything. The point was just that if the next output is determined solely by looking back at some finite portion of the preceding input and output, then it will not be possible to "remember" that a particular symbol occurred in the input for an indefinite period of time. So the transducer would have to insert something in the output every so often to "remind" itself.
There's a loose analogy between that sort of process and successive-cyclic movement.

— Alex Drummond (2016-02-22 10:48)

@Alex: This won't solve anything; you will still be able to remember exactly k units of information about moving things (per state), as if you have more moving things than that, you will have forgotten about the earlier ones once you've written all the traces down.

I think that Thomas must have been thinking of his tier-based representation for movement when he made that complexity claim. I don't see how to make any sense of it otherwise. (I still don't, even in tiers, but I'm willing to give him the benefit of the doubt.)

— Greg Kobele (2016-02-22 10:37)

> The gist is that bounding "makes sense" in a theory where PMs are mapped into PMs in a computationally reasonable way. It is less clear why sticking traces into derived objects (if understood as yields, i.e. ss or ms in pairs) makes any sense at all given their interpretive vacuity.

What I got out of the previous discussion on that point is that it's possible to imagine certain kinds of finite state transducer that might need to insert traces. So, thinking in string terms, imagine that each output token is determined by the most recent k input tokens together with the most recent k output tokens.
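A toy sketch of such a window-limited transducer (the alphabet, the trace symbol 't', and the re-emission policy are invented for illustration, not anything proposed in the thread): it echoes its input, and since its decisions consult only the last k output symbols, a wh-mark stays detectable only if a trace is re-emitted before the mark scrolls out of the window.

```python
def k_window_transduce(tokens, k=2):
    """Echo the input; the decision to emit an extra trace 't' looks
    only at the last k output symbols, never at unbounded history. A
    'w' (wh-mark) therefore stays locally visible only because a trace
    is re-emitted before it leaves the window -- loosely analogous to
    successive-cyclic movement leaving intermediate traces."""
    out = []
    for x in tokens:
        out.append(x)
        # re-emit a trace if the mark (or an earlier trace) is still
        # visible in the k-window and we did not just emit the mark
        if x != 'w' and any(s in ('w', 't') for s in out[-k:]):
            out.append('t')
    return out
```

For instance, k_window_transduce(['w', 'a', 'a']) yields ['w', 'a', 't', 'a', 't']: the wh-information is still locally visible at the end despite the bounded window, while an input with no 'w' passes through unchanged.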
This kind of transducer couldn't put anything into "storage" for an indefinitely long time, and so would have to regularly copy dummy symbols into the output instead.

— Alex Drummond (2016-02-22 10:23)

Marcus Kracht's work on compositionality addresses your

> idea that an expression can be semantically well formed even if syntactically illicit

The idea is that there are independent domains (phonological, categorial, semantic), each with their own operations. What we think of as the grammatical operations are really pairings (or triplings) of operations from each of those domains. One application of *merge* to an expression e = (ph, c, m) is the simultaneous application of the sound part to ph, the category part to c, and the meaning part to m. It is therefore no problem to trace the 'meaning' part of *merge* through an otherwise illegitimate series of derivational steps.

I'm not sure that this is the right way to think about semantic well-formedness despite syntactic ill-formedness. I think it is useful to distinguish three cases: 1) an underivable sound-meaning pair; 2) a derivable but otherwise ill-formed s-m pair (perhaps because there were three subjacency violations); 3) an s which is not part of any derivable s-m pair but which is a few cognitive 'repairs' away from a derivable s'-m' pair. I am happy saying that cases like 3 abound; I have no problem understanding the speech of late second language acquirers of English, or of very early first language acquirers. A commonly accepted instance of case 2 comes from processing difficulties like center embeddings (but I used the example I did because I think that it is natural and faithful to the original proposal to view subjacency this way).
It is attractive to me to think that all cases of something being "semantically well-formed despite being syntactically ill-formed" are instances of either 2 or 3.

— Greg Kobele (2016-02-22 08:59)