This post is intended as an intellectual provocation. It is the strongest version of a thought I've had knocking around in my head for quite a few years now, but not necessarily a version that I'd be willing to formally defend. Therefore, I urge readers not to lose sight of the fact that this is written in a blog; it is as much an attempt at thinking "out loud" and engaging in conversation as it is an attempt to convince anyone of anything. [Norbert has helped me think through some of these things, but I vehemently absolve him of any responsibility for them, and certainly of any implication that he agrees with me.]
My point of departure for this discussion is the following statement: were the mapping from phonetics to phonology to morphology to syntax to semantics to pragmatics isomorphic – or even 100% reliable – there would be little to no need for linguists. Much of the action, for the practicing linguist, lies precisely in those instances where the mapping breaks down, or is at least imperfect. That doesn't mean, of course, that the assumption that the mapping is isomorphic isn't a valid null hypothesis; it probably is. But an assumption is not the same as a substantive argument.
If you disagree with any of this, I'd be interested to hear it; in what follows, though, I will be taking this as a given.
So here goes...
––––––––––––––––––––
The last 15-20 years or so have seen a trend in syntactic argumentation, within what we may broadly characterize as the GB/Principles-and-Parameters/minimalism community, of treating facts about the interpretation of an utterance as dispositive in arguments about syntactic theory.
One response that I've received in the past when conveying this impression to colleagues is that all syntactic evidence is inexorably tied to interpretation, because (i) string-acceptability is just the question of whether utterance A is acceptable under at least one interpretation, and so (ii) string-acceptability is not different in kind from asking whether A is acceptable under interpretation X versus under interpretation Y. In fact, this reasoning goes, there really isn't such a thing as string-acceptability per se, since the task of testing string-acceptability amounts to asking a person, "Can you envision at least one context in which at least one of the interpretations of A is appropriate?"
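The existential-quantification picture in (i)-(ii) can be made concrete. Here is a toy sketch, not from the post itself; the judgment data and all names in it are invented purely for illustration. It represents judgments as a relation between strings and candidate interpretations, and defines string-acceptability as acceptability under at least one interpretation:

```python
# Toy formalization of points (i)-(ii) above; the judgment data and names
# here are invented for illustration, not actual elicited judgments.

# Hypothetical judgments on (string, interpretation) pairs: True = acceptable.
JUDGMENTS = {
    ("Why do you know the guy that brought us pizza?", "reason for knowing"): True,
    ("Why do you know the guy that brought us pizza?", "reason for bringing"): False,
    ("What do you know the delivery guy that just brought us?", "what = object of brought"): False,
}

def acceptable_under(string, interpretation):
    """Acceptability of one particular string-interpretation pair."""
    return JUDGMENTS.get((string, interpretation), False)

def string_acceptable(string):
    """String-acceptability per (i): acceptable under at least one interpretation."""
    return any(ok for (s, _interp), ok in JUDGMENTS.items() if s == string)
```

On this picture, asking about string-acceptability is just existentially quantifying over the same pairwise judgments that disambiguated questions probe; that is precisely the sense in which (ii) claims the two tasks are not different in kind.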
I think this is too simplistic, since as we all know, there is still a contrast between Colorless green ideas sleep furiously and *Furiously sleep ideas green colorless. But even setting that aside for now, I don't think that the fact that an utterance A has at least one interpretation should be treated (by syntacticians) on a par with the fact that it has interpretation X but not interpretation Y. The reason is that the isomorphic mapping from syntax to semantics (or vice versa, for the purposes of this discussion) is a methodological heuristic, not a substantive argument (see above).
Let's illustrate using an example from locality. Evidence about locality can be gleaned in some instances from string-acceptability alone. That (1) is unacceptable does not depend on a particular interpretation – nor does it even depend on a particular theory of what an interpretation is (i.e., what the primitives of meaning are), for that matter.
(1) *What do you know the delivery guy that just brought us?
I therefore consider the unacceptability of (1) dispositive in syntactic argumentation (well, modulo the usual caveats about acceptability vs. grammaticality, I should say). On the other hand, the fact that (2) can only be interpreted as a question about reasons for knowing, not as a question about reasons for bringing, is not the same type of evidence.
(2) Why do you know the delivery guy that just brought us pizza?
To be clear, they are both evidence for the same thing. But they are not evidence of the same kind. And the provocation offered in this post is that they should not be afforded the same status in distinguishing between syntactic theories.
For the sake of argument, suppose we lived in a world where (2) did have both interpretations, but (1) was still bad. I, as a syntactician, would first try to find a syntactic reason for this. Failing that, however, I would be content with leaving that puzzle for semanticists to worry about. (Perhaps, in this counterfactual world, my semanticist friends would conclude that elements like why can participate in the same kind of semantic relationships that regulate the interaction between the logophoric centers of various clauses? I don't know if that makes any sense. Anyway, I won't try too hard to reason about what other people might do to explain something in a hypothetical world.) More importantly, I'd keep the theory of locality exactly as it is in our world. Obviously the other world would be a less pleasing world to live in. The theory of locality would enjoy less support in this hypothetical world than it does in our world. But the support lost in this counterfactual scenario would be circumstantial, not direct; it is the loss of semantic support for a syntactic theory.
There are (at least) two things you might be asking at this juncture. First, is this distinction real? Aren't we all linguists? Aren't we all after the same thing, at the end of the day? I think the answer depends on granularity. At one level, yes, we're all after the same thing: the nature and properties of that part of our mind that facilitates language. But insofar as we believe that the mechanism behind language is not a monolith; that syntax constitutes a part of it that is separate from interpretation; and that the mapping between the two is not guaranteed a priori to be perfect, then no: the syntactician is interested in a different part of the machine than the semanticist is.
Second, you might be asking this: even if these distinctions are real, why are they important? Why should we bother with them? My answer here is that losing sight of these distinctions risks palpable damage to the health of syntactic theory. Above, I noted that in research on syntax, evidence from interpretation should take a back seat to evidence from string-acceptability. But it feels to me like way too many people are content to posit movement-to-spec-of-TargetInterpretationP (or -ScopeP) without the understanding that, as long as the evidence provided is purely from interpretation, this is really just a semantic theory expressed in syntactic terms. (One might even say it is an 'abuse' of syntactic vocabulary, if one's point were to try and provoke.) This will end up being a valid syntactic theory only to the extent that the aforementioned syntax-semantics (or semantics-syntax) mapping turns out to be transparent. But – and this is the crux of my point – we already know that the mapping between the two isn't always transparent. (As an example, think of semantic vs. syntactic reconstruction.) And so such argumentation should be treated with skepticism, and its results should not be treated as "accepted truths" about syntax unless they can be corroborated using syntactic evidence proper, i.e., string-acceptability.
But take the "scope = c-command" hypothesis: it's been confirmed with true syntactic evidence so many times and in so many ways that, at this point, it can be taken as the null hypothesis, don't you think? If so, analyses involving movement-to-ScopeP are on safe ground, syntactically. Or is it your suggestion that movement for scope (e.g.) only takes place when it is directly verifiable in the syntax, and not otherwise? Are quantifiers in superposition until we go look?!
On another note, as apparent exceptions to supposition (i) from your post (string acceptability requires interpretability), I've always been fascinated by sentences like the following, which, to my ear, are perfectly string-acceptable:
1) More people have been to Berlin than I have.
...but this is uninterpretable. If islands are sentences one can "think but not say", then (1) and others like it strike me as sentences you can say but not think. (Some have tried to convince me that an interpretation is coerced for (1) -- perhaps "lots of people have been there and, by the way, I haven't" -- but naive speakers never report this.)
Re: your first point, @Craig, I agree that if certain syntax-semantics correlations have been demonstrated beyond reproach, then one can construct arguments of the form "the semantics of X is Y, which we know correlates without exception with syntactic structure Z, therefore the structure of X is Z." The thing is, scope and c-command are not in fact in that tight a relationship. And I'm not talking here about a Barker-style (Barkerian?) eschewing of c-command entirely. I'm talking about the fact that scope correlates with c-command except when existentials are involved, and modulo reconstruction, and modulo QR and other covert movement, etc. etc. So using the same structure as the above thought experiment concerning islandhood, I think evidence for c-command from scope is indeed semantic evidence, unless and until corroborated with syntactic evidence proper.
Notice that I'm walking a fine line here, because I don't want this post (and discussion) to turn into mud-slinging at particular proposals/analyses that I personally think are guilty of this equivocation. Instead I want to focus on the methodological issue. But one result of this is that I end up speaking in generalities, so apologies for that.
Re: your second point, this was the reason for the big fat parenthetical, "(well, modulo the usual caveats about acceptability vs. grammaticality, I should say)." In fact these kinds of sentences – as well as the The cheese the cat the dog chased ate stunk type – were precisely what I had in mind. But I actually think this is mostly orthogonal to the point of the post. There are two ways to go with these sentences: you could develop a grammar (syntax, semantics, and everything) that predicts they are grammatical; or you could treat their acceptability as a fact about performance, and seek to develop a model of performance that links their acceptability to a grammar that deems them ungrammatical. Whichever avenue is chosen, though, I think the methodological principles that I advocate for in developing the grammar still hold.
I think I agree with the overall point of this post but disagree with some of the details. What I certainly strongly agree with is that there's a tendency to invoke "movement-to-spec-of-TargetInterpretationP (or -ScopeP)" on the basis of the fact that the relevant string has a certain interpretation, without paying due heed to the possibility that scope-taking might proceed by some mechanism that is not movement. (And even if we grant that some scope-taking happens via movement, for example Chinese wh-phrases or whatever one's favourite is, this still doesn't establish that all scope-taking happens via movement.)
I think the discussion of sentence (1) though is missing something:
(1) *What do you know the delivery guy that just brought us?
It's true that the unacceptability of this does not depend on a particular interpretation, i.e. you get the same result, namely unacceptability, no matter which interpretation you try to pair it up with. But how do we use this as evidence in forming a theory of locality? To do that we have to consider particular analyses of that string, and notice that such analyses are in violation of some locality constraint. What's interesting in this case is that the analysis according to which 'what' has moved from the second object position of 'brought' seems like it must be in violation of something, because pairing the string with the interpretation that we associate with that analysis yields unacceptability. (As above, the conclusion only holds if we assume certain things about what interpretations get associated with which analyses.) In other words, even though this string is indeed unacceptable regardless of interpretation, the fact that we actually use to advance our theory is the fact that it's unacceptable on a particular interpretation -- this informative fact is entailed by, but is not the same as, the fact that it's unacceptable regardless of interpretation. So for this reason, these kinds of cases don't seem all that different from cases that are acceptable on some interpretations and unacceptable on others: the consequences for syntactic theory depend on taking one interpretation at a time anyway.
By the way, I think you can do all this while still being very agnostic about the details of what meanings are. All you need to observe is that (1) is unacceptable on the interpretation that one would "expect" to be associated with it, given the interpretation associated with (2) and the relationship between the interpretations associated with (3) and (4).
(2) You know the delivery guy that just brought us something.
(3) What did you say the delivery guy just brought us?
(4) You said the delivery guy just brought us something.
@Tim: I disagree (shocking, I know). All you need to know to draw syntactic conclusions from the ungrammaticality of (1) is that brought takes two objects, and that the "take object" property can – in the general case – be satisfied at a distance by elements like what. (If the verb to bring is only optionally ditransitive for you, substitute a verb like to hand here.) Crucially, this can be done without saying very much about what the sentence means, what the verb means, or what the semantic import of "take an object" is.
But 'brought' doesn't have to take two objects, does it? We can say 'John brought beer'. So what tells us that 'what' has anything to do with the second object of 'brought' in (1)?
Of course I'm sure we can construct variants of the sentence involving obligatorily ditransitive verbs, but we seem to be missing something if we need those cases rather than (1), aren't we?
I don't think so. We use non-optional complements to build a strong case for the relevant syntactic hypotheses, and then once those are properly motivated, we can just go ahead and use them to explain the optional-complementation cases, too. (This is why, when I teach intro to syntax, I use devour and not eat.)
Is this anything more than a defect introduced by me using brought instead of handed in the original post?
I'm not sure I follow. What I meant in my last paragraph above was just that if we imagine a world where there are in fact no obligatorily ditransitive verbs like 'handed', then it would seem odd to say that in that world, unlike this one, there is a certain kind of evidence (i.e. the special interpretation-independent unacceptability) that cannot be used as support for the locality constraint in (1).
Or, to try to make the same point a different way -- i.e. to try to find another case where we need interpretations rather than selectional requirements -- consider this sentence:
(5) What did the man who brought Mary see the woman who brought John?
This is unacceptable regardless of interpretation. But in particular, it's unacceptable paired with the "what did the man bring Mary" interpretation, which tells us that the analysis where 'what' is extracted from the subject violates some constraint; and it's unacceptable paired with the "what did the woman bring John" interpretation, which tells us that the analysis where 'what' is extracted from the object violates some constraint.
Now I suppose one could say that, since it's unacceptable on all interpretations, all the analyses we can come up with for it violate some constraint; from this it follows that the analysis where 'what' has been extracted from the subject violates some constraint, and it also follows that the analysis where 'what' has been extracted from the object violates some constraint. But this seems to be creating an artificial distinction between the way we use this evidence and the way we use the other kinds of cases like your original sentence (2).
If I'm reading between the lines correctly (please correct me if I'm wrong), your reason for wanting to draw this distinction is to separate out the cases where we can reach conclusions that are not dependent on assumptions about how interpretations relate to structures, i.e. cases where we do not need to apply the risky movement-to-spec-of-TargetInterpretationP logic that we both dislike. I think I agree with that in principle, and so maybe I'd agree in other cases that there's real additional "soundness" in reasoning without regard to interpretations, but in practice the semantic effects of fronting a wh-word are so clear that I don't think we're taking the same kind of risk when we work with sentences like (1).
Let me try to rephrase my point using your examples. What I'm saying is that the distinction between your (5) (the unacceptability of *What did the man who brought Mary see the woman who brought John?) and my (2) (the nonambiguity of Why do you know the delivery guy that just brought us pizza?) is not at all, as you put it, an "artificial distinction." It is a very substantive distinction. That the distinction turns out to be moot in the case of wh-movement is fine, but forgetting that it is there is precisely what invites the methodological slippery slope that ends with movement-to-spec-InterpretationP.
So by no means am I suggesting that we jettison data like (2) from the body of evidence supporting the existence of island effects. But I am suggesting that we give them the status of circumstantial support – in particular, support that rests on the fact that collapsing an important distinction (string-acceptability vs. availability of interpretations) turned out to be innocuous in this particular domain.
I can think of good interpretations for that string. For example, suppose a delivery guy gave us a lift to a party. Afterwards, I find out you knew him. In surprise, I utter (1), with a comma intonation after "what".
Or, suppose an indescribable beast which I can only refer to as "that" has just brought us a delivery guy and left him in a heap in the front hall. Just afterwards, I hear a weird little yelp and wonder aloud what it was. Answering my own question, I utter (1), now with a comma not after "what" but after "know".
Normal practice makes it safe to ignore silly readings. Isn't your approach going to create a big nuisance of finding sentences that don't have any?
Interesting question. One answer is that "string acceptability" is a proxy for acceptability of a phonation that includes intonation, prosodic breaks (or lack thereof), and so forth. I don't think that's an unreasonable methodological burden; it's the reason we don't do fieldwork over email (or other textual mediums), except when the informant is themselves a trained linguist.
Perhaps what you are alluding to is that, like the slippery slope I alluded to in an earlier comment, there is also a slippery slope lurking on the PF side. For example, do sociolinguistic variables (accent, rate of speech, etc.) count for acceptability? The thing is, I think everyone is very aware that the mapping on the PF side is hardly transparent. You seldom encounter people going from "yawanna in yawanna go home? is one phonological word!" to "yawanna is a constituent!"
If I wanted to create less of a provocation, then, I could have said something like "Look, the mappings of syntax to its interfaces are both imperfect." (That would hardly be news to the practicing linguist.) But then the question is: Why the asymmetry? Why is it that, in practice, facts from interpretation get to automatically bear on syntactic argumentation, when the same is not true for facts from phonology?
The unacceptability of example (1) could also reflect a semantic rather than a syntactic constraint. In the particular case of (1) this might not be so plausible, but there are other instances where people have argued that a certain string is unacceptable because it does not compose semantically (e.g. Keine & Poole's recent work on apparent intervention effects on tough movement). So I don't really see any contrast between the kind of evidence that (1) and (2) give us. The unacceptability of (1) under any interpretation could potentially be the result of either a syntactic or a semantic constraint, and so could the unacceptability of (2) under a certain interpretation.
We all agree that unacceptable sentences don't come with a flag that tells us why they are ungrammatical. Thus, my point was not that unacceptable strings are a priori guaranteed to be ungrammatical for syntactic reasons; my point was that if one wants to deduce something about syntax from their unacceptability, one does not need to lean nearly as heavily on what amount to heuristic assumptions about the syntax-semantics mapping.
I don't quite see why this would be true in general. Take (3):
(3) *Cholesterol is important to Mary to avoid.
To deduce something about syntactic locality constraints on tough movement from the unacceptability of (3), it is minimally necessary to assume that Keine & Poole's alternative semantic analysis is false (in other words, that the semantic component is such that it can assign an interpretation to the syntactic structure of (3)). Of course, it is also necessary to make certain assumptions about the semantic component to run an argument that (2) exemplifies a certain syntactic locality constraint on movement of adjuncts. I'm not sure that either set of assumptions is obviously any richer or more controversial than the other. In general, almost any apparently syntactic constraint could in principle turn out to be a semantic one and vice versa, so we almost always have to make certain assumptions about the boundaries of and interface between those two domains, even if these are often tacit.
I see that while writing my comment, Alex beat me to mentioning tough-constructions.
@Alex: I agree with you when you say, "In general, almost any apparently syntactic constraint could in principle turn out to be a semantic one and vice versa." As I said in a previous comment, this is never given a priori.
My point is that there is just no way to reason about the syntactic implications of (2) without heavy leaning on the nature of the syntax-semantic mapping; but crucially, there is a way to reason about (1) without such leaning. Now, in any given case, it may turn out that reasoning about the syntactic implications of datum X without considering semantics happens to be wrong (for empirical reasons). And perhaps (3) is precisely such an X. But an attempt at purely syntactic reasoning about (2) strikes me as impossible from the get-go. That's the difference I'm trying to zoom in on.
Interesting discussion. I've read and heard similar arguments by Gereon Müller. I agree with you that the reliance on interpretation-based arguments *alone* can be (and often is) problematic, although I disagree on how problematic it is.
The reason is that sometimes semantics happens to provide really nice arguments for what we believe in syntactic theory. Some examples: (i) (Perhaps) the best empirical arguments for economy come from Fox's work on QR. (ii) Even more rudimentary, the lambda calculus is a much cleaner system than the Theta Criterion, in particular if you believe the former is necessary regardless. (iii) Then, what about WCO in distinguishing A-movement and A'-movement? Binding evidence in general is about comparing a string against a set of interpretations.
That said, I can think of one good example illustrating the methodological issues that you've discussed, with the following logic: X and Y have the same interpretation Z, therefore X and Y must have the same underlying structure. This is pervasive in the work on tough-constructions: under the assumption that "John is easy to please" and "It's easy to please John" mean roughly the same thing, the tough-construction must be derived from the expletive construction. The problem is that such a derivation (where the tough-subject A'-moves and then A-moves) would violate Improper Movement, or on Hicks's smuggling analysis Freezing—syntactic principles that buy us quite a bit. And here's the punchline: This "long movement" analysis goes against a wealth of *syntactic* facts showing that the two must have separate structures, mostly from Lasnik & Fiengo 1974. (Partee 1977 reaches roughly the same conclusion based on the semantics.)
Last, it's worth mentioning that the foundational work in natural language semantics is rather explicit about the mapping between syntax and semantics being a *homomorphism*, e.g. Montague's PTQ and UG and all of Barbara Partee's work.
@Ethan: Oh I can think of plenty of examples where syntax has been "led by the nose" by semantics, to fairly embarrassing results. But this is not an Omer's-opinion-of-various-theories post, it is a general methodological one :-)
And fair point re: isomorphism vs. homomorphism; I was being terminologically sloppy, which there's really no excuse for. But my position (in sharp contrast to everything in the Montagovian tradition, as far as I can tell) is that that too is a heuristic which we, as syntacticians, rely on at our methodological peril.
The point that I was trying to make is that there is a/some place in syntactic theory for semantics-based argumentation. We can only ever view syntax from the outside, i.e. from PF (string acceptability) and LF (interpretation). I don't see why we should necessarily favour PF over LF, or vice versa. Either can and has led to bad theories. [Spec, InterpretationP] is not that different from [Spec, MorphemeP] or [Spec, PutSomeAdverbHereP]. But perhaps we are in agreement here. I just wanted to take a stance against be-inherently-skeptical-of-semantic-argumentation.
Ah. So here we might truly differ. There is of course some amount of theory that goes into any observation (as others have rightly stressed in the comments here). But it seems to me that the amount of theory that goes into treating a morpheme as an "observable" is not nearly as involved as the theory that goes into treating an interpretation as an "observable." So if pressed, I think I would advocate a "favor PF over LF" methodology. (I should stress that this would be a methodological commitment, not a theoretical one. I.e., I don't think syntax is a derivation "towards PF" any more than I think it's a derivation "towards LF"; in fact, I think it's a derivation towards neither of them: I'm a big believer in single-output syntax.)
This doesn't make movement-to-spec-MorphemeP a good theory. But I disagree that it has (or should have) the same status as movement-to-spec-InterpretationP.
I don’t disagree that simply proposing a semantically interpreted but null head to which covert movement takes place is really a reductio for an analysis. But I don’t think that step is often taken, or, when it is, taken seriously. However, I don’t see any issue about using correlations between sentences and meanings as important evidence. Lilly bit Anson and Anson bit Lilly is a nice correlation between order (actually constituency) and interpretation in English that is phenomenologically robust and which syntactic theory should have something to say about. What about examples like Anson, Lilly bit? Acceptable string with a phenomenologically robust difference in meaning to Lilly bit Anson that you can operationalise by presenting the strings with previous contexts. Syntactic theory better have something to say about this correlation, because we don’t have any other theory that is able to systematically correlate form and interpretation. What about scope of universals from an embedded clause across a higher existential, versus the reverse case? Again, phenomenologically robust. Are scope effects like this to be analytically dealt with via movement to some higher syntactic position? Probably, but I’m willing to think it’s Cooper Storage, or even adopt the Glue Semantics view—though these are really just syntax using a different data structure than our standard syntactic objects or using a different combinatory procedure. But the crucial thing is that we still have a correlation between form (actually word class) and interpretation. Word class because the syntactic position of indefinite determiners is distinct from that of universals (every three days vs some three days). Interpretation because there is a phenomenologically robust distinction between the capacity of indefinites to scope outside finite clauses and that of universals.
It’s not the semanticists’ responsibility to work out how to analyse that syntactic difference in word class, it’s ours, and since it correlates with a difference in meaning, our syntactic representations would be well advised to be able to support that.
And when you say 'string acceptability', do you mean always in out-of-the-blue contexts? Virtually all the meaning distinctions you want to make can be operationalised via differential string acceptability in controlled contexts. If you mean only in out-of-the-blue contexts, then I think you fall into the problem Peter pointed out.
@David: Concerning the problem Peter pointed out, I think I answered that above. "String acceptability" means acceptability of an utterance, not an orthography. So yes, out-of-the-blue. (Maybe you don't find that answer satisfactory, but if not, I'd be interested to know why.)
Concerning the main issue, you say:
"It's not the semanticists' responsibility to work out how to analyse that syntactic difference in word class [between something like some and something like every], it's ours and since it correlates with a difference in meaning, our syntactic representations would be well advised to be able to support that."
I strongly disagree. I'm obviously not proposing that semantics ignore syntax; I believe in an interpretive semantics, i.e., one that interprets the (single) output of syntax and builds a meaning representation, whatever a meaning representation turns out to be. From here to concrete claims about how, exactly, syntax should support the relevant meaning difference, there is a fairly wide ravine. Maybe it's the different position of the quantifiers (as you suggest); maybe it's different features borne by each quantifier; maybe (in the worst of these possible worlds, I suppose), it comes down to syncategorematic rules imposed by each quantifier. And if a semanticist came to me and said that their interpretive procedure works best if the distinction is delivered to them in terms of the structural position of each quantifier, I'd say, "That's nice; we'll see if there is any syntactic evidence suggesting that such a distinction in structural position actually exists."
To zoom out from this specific example, when you say "Syntactic theory better have something to say about [these meaning-form correlations], because we don't have any other theory that is able to systematically correlate form and interpretation," I don't think you're right. The combination of syntactic theory with a theory of interpretive semantics better have something to say about these meaning-form correlations, and you can throw in morphology for good measure (lest we forget morphosemantics). Whether a particular meaning-form correlation is the purview of morphology, syntax, or the way semantics interprets one or both of these is, I think, an empirical question. The answer is certainly not guaranteed a priori to be "syntax." And so, as far as I can see, going from semantic facts directly to conclusions about syntax remains a perilous move.
So what counts as purely syntactic evidence? I just don't think that evidence comes with a little flag saying: hi, I'm syntactic evidence! Seems to me your methodological imperative here would suggest we should just Jabberwockyize all of our data, so it's just bits of listed English morphology scattered through nonsense words that's the input to theorising. Take selection of arguments by obligatorily transitive verbs mentioned above. We can even drop these in certain contexts (repetitive actions, generics, cases with discourse topics, particular registers like recipes or instructions, etc.), so we don't even have access to argument structure facts under this view. I don't think we'll get very far with such a dataset in developing a syntactic theory. But maybe I've misunderstood and you can explain after my next lecture. Oh, that's in 10 minutes!
I dunno, seems to me you can get pretty far in syntax with just checking which morphemes occur where. I agree that this requires some idealizing on both the semantic and the phonological side (see again my answer to Peter). But I just don't think that generally speaking, inferences from meaning to syntactic structure are as innocent as most practicing syntacticians take them to be.
Lecture time!
I think Jabberwocky syntax is a brilliant idea. In fact, it would be a great way to train people with little linguistics background to understand the point of making the judgment. And it's not like the judgments are difficult to make:
what did you snorp the packeldy mung that just grummed us?
More forples have been to Smargon than I have
Jeff, yeah, I've used this in classes sometimes, but the problem comes with things like the contrast between
1. Anson handed/presented the book to David
2. Anson handed/*presented David the book
3. Anson *handed/presented David
vs
1. The glumfledorp snartfigged a nurkle to Pipsu
2. The glumfledorp snartfigged Pipsu a nurkle
3. The glumfledorp snartfigged Pipsu
judgements on the second set aren't that clear! You can extend this argument against all syntactic data being Jabberwockified quite far, I think. To get traction on it, you need the meaning to act as an anchor for what is going on in the judgments, which are fundamentally about patterns of meaning-form pairs.
That depends on what kind of syntax you're doing. You absolutely do not need the meaning to act as an anchor for the judgment in:
(1) * The only blorg that is gleeping are klarping.
This judgment is not "fundamentally about patterns of meaning-form pairs." This might be a good place to remind everyone that the fundamental feature calculus device in minimalist syntax (Agree) is modeled after this kind of data (phi feature agreement). It is then used, analogously, to model things that do have semantic import (say, focus marking). This is exactly what you'd expect if semantic interpretation reads its input off of syntax: syntactic properties can form the basis for semantic distinctions, but don't have to.
Of course. I'm not saying there is no syntax! I'm saying that it's not sensible to restrict yourself to Jabberwockified data or you end up missing out large parts of syntax. See post above. This might be a good place to remind everyone that the fundamental structure building operation in minimalist syntax (Merge) is modelled after constituency data that is not available in many Jabberwockified sentences - especially in poorly inflected languages. It is then used to model things such as argument structure. I don't think we disagree that syntax feeds semantics, but I think we do disagree about flagging data as `this is more important than that'. I think that's a hiding to nothing.
I agree with your assessment of our disagreement, except that I'd replace 'important' with 'methodologically less fraught'.
And, as for poorly inflected languages – that's fine. We could investigate the relevant issues in richly inflected languages, first.