Those of you who don't work on case probably have in your heads some rough sketch of how case works. (e.g. Agree in person/number/gender between a designated head and a noun phrase, resulting in that noun phrase being case-marked.) What you need to realize is that basically nobody who actually works on case believes that this is how case works.

Now, whether or not this is really how it all went down, possibly-apocryphal-Mark has a point. In fact, I'm here to tell you that his point holds not only of case, but of agreement, too.
In one sense, this situation is probably not all that unique to case & agreement. I'm sure presuppositions and focus alternatives don't actually work the way that I (whose education on these matters stopped at the introductory stage) think they work, either. The thing is, no less than the entire feature calculus of minimalist syntax is built on this purported model of case & agreement. [If you don't believe me, go read "The Minimalist Program" again; you'll find that things like the interpretable-uninterpretable distinction are founded on the (supposed) behavior of person/number/gender and case (277ff.).] And it is a model of case & agreement that – to repeat – simply doesn't work.
So what model am I talking about? I'm really talking about a pair of intertwined theories of case and of agreement, which work roughly as follows:
(1) there is a Case Filter, and it is implemented through feature-checking: each noun phrase is born with a case feature that, were it to reach the interfaces (PF/LF) unchecked, would cause ungrammaticality (a.k.a., a "crash"); this feature is checked when the noun phrase enters into an agreement relation with an appropriate functional head (T0, v0, etc.), and only if this agreement relation involves the full set of nominal phi features (person, number, gender)
(2) agreement is also based on feature-checking: the aforementioned functional heads (T0, v0, etc.) carry "uninterpretable person/number/gender features"; if these reach the interfaces (PF/LF) unchecked, the result is – you guessed it – ungrammaticality (a.k.a., a "crash"); these uninterpretable features get checked when they are overwritten with the valued person/number/gender features found on the noun phrase (a toy sketch of this model immediately follows)
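To make the model in (1)-(2) concrete, here is a deliberately naive toy sketch (Python, purely illustrative; the names NounPhrase, FunctionalHead, agree, and converges are invented for this post and come from no published implementation). The architecture is the point: features are born unchecked, Agree checks them, and the interface filters out any derivation in which something is left unchecked.

```python
# Toy sketch of the crash-driven checking model in (1)-(2). Illustrative only.

class NounPhrase:
    def __init__(self, person, number, gender):
        self.phi = {"person": person, "number": number, "gender": gender}
        self.case_checked = False      # born with an unchecked case feature

class FunctionalHead:                  # stands in for T0, v0, etc.
    def __init__(self):
        self.phi = None
        self.uphi_checked = False      # "uninterpretable" phi features, unchecked at birth

    def agree(self, dp):
        """Agree copies the DP's full phi set onto the head and checks both sides."""
        self.phi = dict(dp.phi)        # overwrite the head's uphi with the DP's values
        self.uphi_checked = True
        dp.case_checked = True         # Case Filter satisfied via full phi-Agree

def converges(heads, dps):
    """The interface 'filter': any unchecked feature causes a crash."""
    return all(h.uphi_checked for h in heads) and all(d.case_checked for d in dps)

subj, t = NounPhrase(3, "sg", "fem"), FunctionalHead()
t.agree(subj)
print(converges([t], [subj]))          # True: everything checked, no crash
```

The Basque and K'ichean facts below are problems precisely for that last step: they are grammatical configurations in which, on any defensible analysis, the checking could not have taken place.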
From the vantage point of 2016, however, I think it is quite safe to say that none of this is right. And, in fact, even the Abstractness Gambit (the idea that (1) and (2) are operative in the syntax, but morphology obscures their effects) cannot save this theory.
What follows builds heavily on some of my own work (though far from exclusively so; some of the giants whose shoulders I am standing on include Marantz, Rezac, Bobaljik, and definitely-not-apocryphal Mark Baker) – and so I apologize in advance if some of this comes across as self-promoting.
––––––––––––––––––––
Let's start with (1). Absolutive (=ABS) is a structural case, but there are ABS noun phrases that could not possibly have been agreed with, living happily in grammatical Basque sentences. How do we know they could not possibly have been agreed with (not even "abstractly")? Because we know that (non-clitic-doubled) dative arguments in Basque block agreement with a lower ABS noun phrase, and we can look specifically at ABS arguments that have a dative coargument. (Indeed, when the dative coargument is removed or clitic-doubled, morphologically overt agreement with the ABS – impossible in the presence of the dative coargument – becomes possible.)
So if an ABS noun phrase in Basque has a dative coargument, we know that this ABS noun phrase could not have been targeted for agreement by a head like v0 or T0 (because they are higher than the dative coargument). Notice that this rules out agreement with these heads regardless of whether that supposed agreement is overt or not; it is a matter of structural height, coupled with minimality. The distribution of overt agreement here serves only to confirm what our structural analysis already leads us to expect.
And yet despite the fact that it could not have been targeted for agreement, there is our ABS noun phrase, living its life, Case Filter be damned. [For the curious, note that this is crucially different from seemingly similar Icelandic facts, which Bobaljik (2008) suggests might be handled in terms of restructuring. That is because whether the embedded predicate is ditransitive (=has a dative argument) or monotransitive (=lacks one) cannot, to the best of my knowledge, affect the restructuring possibilities of the embedding predicate one bit.]
If you would like to read more about this, see my 2011 paper in NLLT, in particular pp. 929 onward. (That paper builds on the analysis of the relevant Basque constructions that was in my 2009 LI paper, so if you have questions about the analysis itself, that's the place to look.)
––––––––––––––––––––
Moving to (2), this is demonstrably false, as well. This can be shown using data from the K'ichean languages (a branch of Mayan). These languages have a construction in which the verb agrees either with the subject or with the object, depending on which of the two bears marked features. So, for example, Subj:3sg+Obj:3pl will yield the same agreement marking (3pl) as Subj:3pl+Obj:3sg will. It is relatively straightforward to show that this is not an instance of Multiple Agree (i.e., the verb does not "agree with both arguments"), but rather an instance of the agreeing head looking only for marked features, and skipping constituents that don't bear the features it is looking for. Just like an interrogative C0 will skip a non-[wh] subject to target a [wh] object, so will the verb in this construction skip a [sg] (i.e., non-[pl]) subject to target a [pl] object.
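To visualize the search procedure just described (a probe relativized to a marked feature, skipping arguments that lack it), here is a toy continuation of the sketch above. It is my own gloss on the prose, not an implementation from the K'ichean literature, and it encodes only the number contrast discussed here, ignoring person marking.

```python
# Toy sketch: a probe that looks only for the marked value [pl], searching top-down.
# It skips arguments lacking that value, the way an interrogative C0 skips a
# non-[wh] subject. Names and representations are invented for illustration.

def agent_focus_agreement(arguments):
    """`arguments` is ordered by structural height, e.g. [subject, object]."""
    for arg in arguments:                   # inspect the closest argument first
        if arg["number"] == "pl":           # only marked [pl] halts the search
            return "3pl"                    # realize agreement with that argument
    return "3sg"                            # no viable target: default (null) morphology

print(agent_focus_agreement([{"number": "sg"}, {"number": "pl"}]))  # Subj:3sg + Obj:3pl -> 3pl
print(agent_focus_agreement([{"number": "pl"}, {"number": "sg"}]))  # Subj:3pl + Obj:3sg -> 3pl
print(agent_focus_agreement([{"number": "sg"}, {"number": "sg"}]))  # both 3sg -> default 3sg
```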
This teaches us that 3sg noun phrases are not viable targets for the relevant head in K'ichean. Ah, but now you might ask: "What if both the subject and the object are 3sg?" The facts are that such a configuration is (unsurprisingly) fine, and an agreement form which is glossed as "3sg" shows up in this case (so to speak; it is actually phonologically null). That's all well and good; but what happened to the unchecked uninterpretable person/number/gender features on the head? Remember, they couldn't have been checked, because everything is now 3sg. And if 3sg things were viable targets for this head, then you could get "3sg" agreement in a Subj:3sg+Obj:3pl configuration, too – by simply targeting the subject – but in actuality, you can't. [This line of reasoning is resistant even to the "but what about null expletives?" gambit: if the uninterpretable phi features on the head were checked by a null expletive, then either the expletive is formally plural or formally singular. If it is singular, then we already know it could not have been a viable target for this head; if it is plural, and it has been targeted for agreement, then we predict plural agreement morphology, contrary to fact. Thus, alternatives based on a null expletive do not work here.]
What about Last Resort? It is entirely possible that grammar has an operation that swoops in should any "uninterpretable features" have made it to the interface unchecked, and deletes the offending features. But now ask yourself this: what prevents this operation from swooping in and deleting the features on the head even when there was a viable agreement target there for the taking (e.g. a 3pl nominal)? I.e., why can't you just gratuitously fail to agree with an available target, and just have the Last Resort operation take care of your unchecked features later? The only possible answer is that the grammar "knows that this would be cheating"; the grammar makes sure the Last Resort is just that – a last resort: it keeps track of whether you could have agreed with a nominal, and only if you couldn't have are you then eligible for the deletion of offending features. Put another way, the compulsion to agree with an available target is not reducible to just the state of the relevant features once they reach the interfaces; it is obligatory independently of such considerations. You see where this is going: if this bookkeeping / independent obligatoriness is going on anyway, uninterpretable features become 100% redundant. They bear exactly none of the empirical burden (i.e., there is no single derivation in the entire grammar that would be ruled out by unchecked features, only by illicit application of the Last Resort operation).
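The same toy terms show why the uninterpretable features end up doing no work once a Last Resort deletion operation is admitted: all of the filtering is done by the independent requirement that Agree apply whenever a viable target is present, and the interface "crash" test at the end is satisfied in every derivation. (Again, an invented illustration of the argument in the text, not anyone's published system.)

```python
# Toy sketch: with Last Resort available, the interface filter never rules anything out.

def derive(agree_applied, viable_target_present):
    # The real gatekeeper: the grammar must independently enforce that Agree
    # applies whenever its structural description is met ("no gratuitous failure").
    if viable_target_present and not agree_applied:
        return "ill-formed (Agree's SD was met but Agree did not apply)"

    probe_checked = agree_applied
    if not probe_checked:
        probe_checked = True   # Last Resort deletes/neutralizes the unchecked features

    assert probe_checked       # the interface 'crash' test: vacuously satisfied
    return "converges"

print(derive(agree_applied=True,  viable_target_present=True))   # normal agreement
print(derive(agree_applied=False, viable_target_present=False))  # default agreement (3sg/3sg)
print(derive(agree_applied=False, viable_target_present=True))   # excluded, but not by a crash
```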
Bottom line: there is no grammatical device of any efficacy that corresponds to this notion of "uninterpretable person/number/gender feature."
––––––––––––––––––––
At this juncture, you might wonder what, exactly, I'm proposing in lieu of (1-2). The really, really short version is this: agreement and case are transformations, in the sense that they are obligatory when their structural description is met, and irrelevant otherwise. (Retro, ain't it?) To see what I mean, and how this solves the problems associated with (1) and (2), I'm afraid you'll have to read some of my published work. In particular, chapters 5, 8, and 9 of my 2014 book. Again, sorry for the self-promotional nature of this.
––––––––––––––––––––
Epilogue:
Every practicing linguist has, in their head, a "toy theory" of various phenomena that are not that linguist's primary focus. This is natural and probably necessary, because no one can be an expert in everything. The difference, when it comes to case and especially when it comes to agreement, is that these phenomena have been (implicitly or explicitly, rightly or wrongly) taken as the exemplar of feature interaction in grammar. And so other members of the field have (implicitly or explicitly) taken this toy theory of case & agreement as a model of how their own feature systems should work.
And lest you think I have constructed a straw-man, let me end with an example. If you follow my own work, you know that I have been involved in a debate or two recently where my position has amounted to "such and such phenomenon X is not reducible to the same mechanism that underlies agreement in person/number/gender." What strikes me about these debates is the following: if A is the mechanism that underlies agreement, these (attempted) reductions are not reductions-to-A at all; they are reductions-to-the-LING-101-version-of-A (e.g. Chomsky's Agree), which – to paraphrase possibly-apocryphal-Mark – nobody who works on agreement thinks (or, at least, nobody who works on agreement should think) is a viable theory of agreement.
Now, it is logically possible that a feature calculus that was invented to capture agreement in person/number/gender (e.g. Agree), and turns out to be ill-suited for that purpose, is nevertheless – by sheer coincidence – the right theory for some other phenomenon (or set of phenomena) X. But even if that turns out to be the case, because the mechanism in question doesn't account for agreement in the first place, there is no "reduction" here at all.
This comment has been removed by the author.
I think it is HILARIOUS that comment deletion on FoL leaves traces.
(Deleted this comment and reposted below with "notify me" checked. I suppose that violates the inclusiveness condition??)
As someone who works on case and agreement, I have to quibble with certain aspects of how you characterize our consensus. I take this as a friendly amendment, given that your "bottom line" (`there is no grammatical device of any efficacy that corresponds to this notion of "uninterpretable person/number/gender feature"') is really about how case and agreement come to be obligatory in certain languages/configurations, NOT about what sorts of operations actually underlie them when they happen.
1. "Case & agreement live in something of a happy symbiosis": not so controversial an idea among case/agreement folks (although Baker himself rejects it for accusative/object agreement). Most work on this topic still assumes that case and agreement are assigned by some version of Agree. (Certainly, configurational case theories are currently trendy, but I would not say that they have swept the field.) Here are some authors whose work is relevant: Aldridge, Coon, Deal, Legate, Woolford [yep, all women!]
2. The Case Filter: not so controversial. Baker and Vinokurova (2010) appeal to it explicitly, and most other recent work on case either adopts it or doesn't say anything about it.
Please note that these comments are about what I take the consensus to be, without endorsement.
@Amy Rose: You're absolutely correct – the list of Marantz, Rezac, Bobaljik, and Baker was not intended as a comprehensive or even representative list of people working on case/agreement, but simply as a list of people whose work the subsequent theoretical points were based on.
More generally, you might be right that the consensus is not quite aligned with what I say here. The post acknowledges that this is so w.r.t. agreement; but you've convinced me that it is so for case, too.
Of course, the relevant theoretical points are unaffected by this (as your comment already indicates), but I was perhaps hasty in my (and possibly-apocryphal-Mark's!) characterization of the consensus – if there even is one – among generativists working on case.
As someone who does not work on case or agreement per se, I'm in no way able to quibble with your conclusion, but I will say that it raises a massive puzzle.
Regardless of their use as theoretical constructs, uninterpretable features are an empirical fact. So, "-s" on verbs in English spells out present tense, third person, and singular. Tense is certainly interpretable on a verb, but [3Sg] is not.
So, if we accept your conclusion that "there is no grammatical device of any efficacy that corresponds to this notion of 'uninterpretable person/number/gender feature,'" we are left with the puzzling fact that uninterpretable person/number/gender features exist at all.
@Dan: I completely agree with you that, given how we've been accustomed to think about "language design", the existence of uninterpretable features is a mystery. But I actually think that this mystery would have existed even in a world where the MP-style theory worked: even if Agree serves to delete features that would otherwise cause a crash, that still doesn't explain why those features were there in the first place. I presume we both agree that "in order to drive agreement" is not a real answer. (Norbert and I were actually having this very discussion in my office a couple of weeks ago, and he made this point.)
I'll reply, first, with a semantics hat on: not so fast! It's certainly possible to give a semantics to verbal inflection if you want to; it's also easy to set up a semantics to ignore information that isn't relevant to it. The second strategy corresponds to Heim and Kratzer. The first is found in Categorial Grammar work, if memory serves (in fact I feel quite sure Jacobson has this somewhere).
I'll add that, quite independent of the consensus among agreement/case theorists, I think Omer is quite right to quibble with uninterpretable features. There are cases where a derivation survives what we could call "underagreement", which he's emphasized here and in other places. So if the probe probes because it has a formal need to get features, that formal need had better not cause a crash if left unmet. There are also cases where probing continues even after a probe has gotten phi-features -- "overagreement" -- which I discuss in recent work (Interaction and Satisfaction in phi-agreement, http://ling.auf.net/lingbuzz/002610 ). If probing is driven by "missing" features on the probe (or a need to check a box on the probe), this is again mysterious.
@Omer I agree, although before Move and Merge were unified, tying uF's to displacement made a bit of sense. In fact, I think the fact that it brings this question front and centre is a feature, rather than a bug, of your conclusion. It's good to be reminded of the big puzzles of language.
@Amy Rose: A cursory search turned up Dowty and Jacobson (1988) ("Agreement as a Semantic Phenomenon"), which seems to focus mainly on the interaction of gender and binding. That also led me to a note by Dowty disavowing the strong claim of the paper that "ALL instances of agreement can be treated as semantic/pragmatic in nature." He still believes that some agreement is semantic/pragmatic, but this still leaves a puzzling residue of agreement that can't yet be shown to be semantic.
@Dan: the real puzzle is truly formal gender. If the feature has no interpretation anywhere, then it can't have an interpretation in the verbal system. If it has an interpretation in DP, then it can have a parallel interpretation in TP, etc. Agreement is probably a red herring -- formal gender would be just as mysterious from a 'design' perspective even if it never got involved in agreement (but was, say, only ever exponed on nouns themselves).
I think that Omer and David get into this below, but the issue of interpretability is largely an issue concerning the obligatory application of some operation. It is not a question of whether a feature can get an interpretation but whether it must be related to something else for a derivation to converge. One of the ways that Gs force such relations is via interpretability. What forces agreement? That's the G question.
One more point: I think that in addition to gender, the other "hard" feature is case. Why it should exist in a well-defined system is unclear to me. It really seems to have no interpretation. This is why, I assume, Chomsky wanted to reduce it to some reflex of agreement (though why this explained it is also unclear to me).
Maybe this is not the place to ask the question, but as someone who does not work (much) on agreement, I have always wanted someone to explain the following to me. Why is agreement asserted to be mysterious from a "design" perspective? I think it is quite solid and defensible to claim that there is "agreement" morphology sprinkled all over language, and that this is a pervasive fact about natural language systems (by which I mean that the marking has got to do with making formal connection to another symbol in the hierarchical representation, without any necessary semantic import). The question is why that should be. And the question framed in those terms makes sense to me. But framing the question as an imperfection does not. For example, it would be interesting to explore the idea of agreement being precisely the kind of design feature where overt cues of connectivity are required to mediate the translation between symbolic and hierarchical structuring and linearisation/spell-out, which disrupt the localities and connections in the former domain by having to be squashed into the serial stream. Or we could explore 3rd factor design feature ideas of redundancy that allow swifter and more efficient decoding, as in phonological coarticulation and spreading phenomena. It seems to me that the discourse of mysterious uninterpretable features and interface crashing is a notation that is closing us off in the realms of theoretical speculation as well. (Of course the connectivity idea does not apply to languages with form classes where there is no agreement that picks up on it, but maybe those are just lexicon organising principles, or like last names in a country with only 4 last names. Is Naming semantics?)
Uninterpretable features in the minimalist program are a way of implementing a system that forces agreement-like exponence in the places where it seems to be obligatory. But as an abstract overall strategy it seems it is too strong and general to actually do the job, given the cases of over- and underagreement that we see. So we need to fix the basic architectural assumption and move on, right? Since I was doing catch-up after a long hiatus (Writing Cave), I also read Norbert's entry on Theory, Again, and it seemed to me that he was complaining about just the kind of thing that gets the stuffing knocked out of it by Omer's post about case and agreement.
I guess the real mystery (for me) is the fact that displacement and agreement coexist in a system, and often co-occur in a derivation. If agreement were a method for marking long-distance dependencies, why does it often happen between a verb and its subject, especially when the subject has overtly moved to be local to the verb? And if it is a redundancy, that's pretty puzzling, as redundancy is a hallmark of engineered, rather than evolved, systems.
@Gillian: I think you're right to invoke redundancy as a relevant third factor; that's the direction I've pushed in my interaction/satisfaction work. Your phonology examples are good evidence that we can't simply stipulate that natural phenomena don't feature redundancy. (Perhaps in discussion of natural systems there's a tendency to rebrand redundancy as "symmetry"?)
Thanks. I will check it out. I am interested in figuring out the right answer to this question. G
I think I have two issues with Omer's characterization of what is going on. The first is that I don't think Agree is a theory of agreement, and I don't really think anyone ever thought it was. It's a theory of dependency. So there are Agree based approaches to selection (including non-local EPP style selection), tense and participial inflection, and negative concord, wh-dependency, relative-dependencies, ..., etc, in addition to Agree based approaches to Case and predicate-argument phi-agreement. Moreover, I think the consensus is that Agree is not a very good theory of certain kinds of phi-agreement (like concord). So, whatever Chomsky might have used to motivate Agree, it's been used in the literature mainly as an approach to encoding dependency between units in structures. And uninterpretability/unvaluedness are just ways of encoding that a dependency is to be initiated.
I'm also not compelled by the empirical arguments. Omer presents them as though they are knock-down, but they're not. They are analysis dependent and so arguments about the nature of the best explanation. Now, I haven't worked on these phenomena, but the Basque absolutive doesn't show the impossibility of an Agree type analysis for case, it just says that, given a particular analysis of the phenomena, an Agree based approach to case will fail. One could have a different analysis where that isn't a consequence (for example, the Appl head introducing the Dative might set up case checking with the lower absolutive in the way that Daniel Harbour and I suggested for Kiowa).
On K’ichean, Omer makes a more interesting argument that a Last Resort operation would be no explanation because nothing would prevent it from deleting features if the probe gratuitously failed to agree with a target and then Last Resort deleted its features, leading to lack of plural agreement when plural agreement should be obligatory. But that makes an extra assumption: that Agree can fail when its conditions of application are met. I’m not going to grant that assumption. Let’s say instead that Agree applies whenever its structural description is met, which I think is the standard view: you see an uninterpretable/unvalued feature in a c-command relationship with a matching interpretable/valued feature and the outcome is that the uninterpretable one is valued and checked (see, for example, the definition of Agree in a standard textbook, say, Adger 2003 ;-)). In the case of 3sg agreement with a 3sg subj and 3sg obj, Agree doesn’t apply because there are no viable targets and Last Resort deletes unchecked 3sg on the higher head. What surfaces is 3sg agreement. In the case of a plural obj (say), Agree’s SD is met, it applies, and the probe is checked/valued, and morphologically interpreted as plural. LR doesn’t apply because it only sweeps in to delete 3sg features that are unchecked.
I'm not saying I think that this Last Resort story is the right one; just that the presence of default agreement phenomena of this sort is not an argument against a model where syntactic dependencies are triggered by diacritics on features. I'm not even saying Agree is a good theory – it's in need of some deconstruction into antecedent mechanisms, without doubt (my own personal view is that it's a side effect of Select operating over sublexical data-structures). And I'm not saying that Omer's conclusions aren't right. I just don't think there's a knock-down argument against incorporating Agree into the theory in more or less the way that most people do (i.e. as a theory of dependency and dependency triggers).
Three comments, David:
1. "The [arguments] are analysis dependent" – I'm no expert on the philosophy of science but that strikes me as tautological; when are arguments ever not analysis-dependent?
2. The Kiowa-style analysis you propose won't work for Basque, because – unlike in Kiowa – there is no ABS-DAT syncretism. If the Appl checks the case on the Theme argument, why does it look, act, enter into agreement relations, etc. etc. exactly like the argument that gets its case checked by v in monotransitives? And, concomitantly, why does the argument that gets its case checked by v in ditransitives (the dative) behave differently? Note that this is not a "quirky case" issue; this is systematic across all mono- and ditransitives. So if you're comfortable ignoring the surface forms to this degree, I'm afraid that your theory – while it might be a theory of something – is not a theory of case & agreement in Basque.
3. Your alternative for K'ichean is not an alternative – it is my theory. The whole point of the book is that obligatoriness cannot be reduced to the featural state of the representations it leaves in its wake. And your alternative concedes this point, and thus there is no argument. Again: I'm not arguing that agreement is not driven by featural diacritics (in fact, the FIND operation I propose as an alternative to Agree is driven in the same way), I'm arguing that its obligatoriness is not reducible to interface conditions & crashes. In your scenario, if the system gratuitously failed to apply the Agree operation in the presence of a 3pl subj or obj, the Last Resort operation would still swoop in and fix this, and no interface violation would arise. The only way to militate against that is to say that the obligatoriness is intrinsic in the mechanism, as it is in the Structural Description, Structural Change model you allude to. And you'll notice that I'm arguing for the very same thing when I say case & agreement are transformations in the derivational sense.
Glad there's no disagreement, but in that case your theory is the theory I laid out in my textbook in 2003, and I was just saying there what I took to be entirely uncontroversial and pretty standard at the time: Agree applies when its SD is met. Doesn't almost everyone think that?
Not at all. Witness all the discussion about "crashes" / ungrammaticality due to unchecked features / interface-driven syntax. You'll notice that, on the story you yourself sketched, no derivation is ever ruled out at the interface by an unchecked feature (either the SD was met and therefore the operation had to apply, or the Last Resort operation swoops in and fixes it). So, again, (un)interpretability plays no role. We can still call these features by any name we want (unvalued/uninterpretable/unfulfilled/...), but they are just syntax-internal instructions to perform a certain valuation operation at a certain time. Which, it seems to me, is too bad if you're into reducing syntax to Merge and nothing else.
DeleteI can tell you that, until I came across these K'ichean facts, I got a ton of pushback for arguing this.
See my comment below which went astray. I think it's pretty uncontroversial to take `uninterpretability'/'unvaluedness' to be a formal property of a feature that triggers an operation as opposed to having an optional agree operation that applies to match features and deletes uninterpretable matched features (which I thought went out the window a long time ago). But it's good to have the fact that there is an issue clarified. I think I was misunderstanding what your proposal really was, but if it's just that Agree is an operation triggered by some diacritic, I'm on board, and have been for quite some time. In that `feature structure' paper, that's the system that is defined. A brief quote "With this definition in place, we can now make our syntactic structure building rules sensitive to the presence of the u prefix, ensuring that when a feature bears such a prefix, there must be another feature in the structure which is exactly the same, but lacks the prefix. ... [example] ... Note that the prefix is doing purely formal work in this system." (Mind you, I do still, in that system, assume that there's a Full Interpretation constraint for selectional features, but these days I'd just ditch such features altogether).
And p.s.: if that's all you're saying, then that's not an argument against the uninterpretability diacritic, right? That diacritic is just a means of triggering the operation – there's a long discussion of this issue in my 2010 `minimalist theory of feature structure' paper: you need to separate the mechanism (uninterpretability or unvaluedness as a trigger for a dependency between matching features) from what the motivation for such a diacritic might be (maybe interpretability at an interface). Your bottom line was "there is no grammatical device of any efficacy that corresponds to this notion of 'uninterpretable person/number/gender feature'", but no one thinks that `uninterpretable person' is a `device', they think that `uninterpretable' is a diacritic (what Peter and I called a second order feature) on another feature. So I now don't see why your theory is any different from most versions of derivational minimalism.
Might one put Omer's point as follows: once one goes in for an obligatory rule of agreement (with SDs and SCs), then what is the value added of the diacritic? Wasn't -interp just there to FORCE agreement in a system with optional rules when understood in a minimalist setting (see next post for discussion)?
David: I think the issue is that, at this point, talking about this all in terms of 'interpretability' is hugely misleading, because we have established that this device (sorry, I don't see your issue with this term; second order features are yet another theoretical device, from where I sit) has nothing to do with interpretability, interpretation, or any other interface phenomenon. It's an instruction: "DO RULE HERE." That's a far cry from Chomsky's Agree (which, in my view, was always a response to interpretability requirements) and the whole dialectic of "crashes", which still pervades syntactic theory.
Ok, but I guess I fail to see why that nomenclature (interpretability) really matters, as long as people know what they are actually doing, and I do think that most people treat uninterpretability as an instruction to create a dependency with a matching feature (in that paper, from 2008, I called it the `match-me' property). Maybe I'm wrong about this and people don't think of interpretability as a diacritic, in which case I'm happy you're bringing some clarity to this issue, and have arguments that go in the direction they do. But I'm skeptical that what you are railing against is indeed `received wisdom'. Maybe in some small areas of the East Coast of the USA ;-).
On `device', sorry, I was assuming the interpretation of `grammatical device' as `grammatical mechanism' as opposed to `theoretical posit'. If you just meant the latter, then "there is no grammatical device of any efficacy that corresponds to this notion of 'uninterpretable person/number/gender feature'" is true if "interpretable" is really meaningful, but again, I don't think it's usually taken to be so, which is why people informally talk about interpretability/valuedness etc. They're just metaphors.
Take, for example, the theory Gillian and I laid out in our 2005 LI paper. There, some features in lexical items are unvalued when the LI is Merged. That's just a dumb property of the feature that triggers Agree to happen. The interpretation of the valued features is then collapsed through the dependency that is formed to a single position in the structure (where the semantic types make that feature's interpretation viable). Our Interpret Once Under Agree principle only makes sense in a system where some property of a feature triggers the matching operation. And that was over a decade ago.
@Norbert I think the issue is that everyone needs a diacritic; even Omer (is that right Omer?). Sometimes a dependency is set up, and sometimes it's not. So we need something to tell us that. I make this argument again in that old features paper basically saying that features have to have more to them than just their presence or absence, or even the simplest agreement dependencies can't be captured.
David: Yes, of course, everybody needs a diacritic. Something needs to tell the system that T needs to probe for phi features and, say, N doesn't need to. (Side note: if you are a big proponent of abstractness, I suppose you could just assume that every category agrees in phi features with every other category, and language-specific morphophonologies choose to realize just a subset of these relations. Then you don't need diacritics. That strikes me as the least interesting of all possible theories, though. And as you already know, someone has now produced an argument against the idea that systematically null agreement ever really exists. So the diacritic is also needed in order to tell you that T probes in Hebrew, but not in Japanese. (And of course T is not involved in the assignment of NOM in either language.))
I'm glad you (and Gillian) are on board with the idea that these diacritics are not reducible to output filters (and ipso facto, that they don't reduce to anything that the interface might reasonably care about). But I assure you that I have encountered this view in countless places, not just on the East Coast, and not even just in North America. Far from it. As Norbert's subsequent post makes clear, this idea has been recurring in linguistic theory since at least the late '70s (perhaps ushered in by Filters and Control). Anyone who has ever championed the (supposed) truth of the Strong Minimalist Thesis implicitly subscribes to the view that everything but Merge is the result of "third factors", hence the interfaces, hence it must take the form of an output filter. So I really don't buy your "nothing to see here" attitude about all this.
(Remember: on the account you sketched for K'ichean Agent-Focus agreement, no single derivation is ruled out by an output filter. And the same would actually be true for English, too, on that account.)
I'm not saying: nothing to see here. I'm just saying that it's not received wisdom. I think systems which treat uninterpretability as a diacritic triggering operations have been around for a long time now, and are pretty widespread, but, don't get me wrong, I appreciate new arguments for that kind of system as opposed to others. But I don't think your proposal is revolutionary in any sense, and I suppose the way the post was written was making me think it was going to be.
I don't think, though, that the non-interface nature of Agree triggers causes problems for `reducing everything to Merge', though I used to (in fact I asked Chomsky about that issue way back at the Cyprus conference and was unconvinced by the answer). I now think that it should reduce to Select, which is a subpart of the Merge computation. If that works, and I hinted at how to make it work in that third Baggett lecture, then it's not necessary that it derive from an output filter.
Haha, I have no idea how a post that urges a return to (one component of) the theory of the '60s – transformations – could be read as if it were proposing something "revolutionary" :-D
Of course the interesting part is the evidence that case & agreement could not be handled otherwise.
(And I still don't understand how any of this is more analysis-dependent than any other argument in linguistic theory.)
Well, you said that the entire feature calculus of the minimalist programme was built on something foundationally wrong and you were going to show how. So I thought it was going to be quite revolutionary, what with the bold fonts etc. But it seems that what is at issue isn't the entire feature calculus of minimalism at all, since one pretty well-established version of that is just what you want to say. So really, as you say, the contribution is new arguments that that's the right way to go. And you don't need to go back to early TG for rule triggers. They are all around us. That's the whole point of the debates on triggered Merge etc. I guess I was hoping we'd disagree more :-).
Obviously, you can define "the feature calculus of minimalism" however you want. But reading The Minimalist Program, Minimalist Inquiries: The Framework, and Derivation by Phase (Chomsky 1995, 2000, and 2001 respectively), I see a lot of talk about "convergence", "crashes", and "checking". To me, a feature calculus consists in more than the fact that a feature value originating in one place can show up in another; that is nearly contentless, given that the desideratum is the transmission of values. In these works, what makes the system run, what compels value transmission to happen, and what is the gatekeeper of grammaticality and ungrammaticality, is "convergence" and "checking". That, to me, is a contentful feature calculus. And like all good theories, it's one that makes predictions, and those predictions are falsifiable – to wit, they have been falsified.
And with all due respect to Adger 2003, it seems to me that most people take their lead from those works. I'm seemingly forever attending talks and reading papers where derivations do or don't converge, where ungrammaticality arises because nothing can check <feature F> on <head H>. If you are not, good for you; I'm envious.
I will also say that I think the post was pretty clear about these things, David. I set up (1) and (2) very early on, and this is what I called a "feature calculus" and what I was arguing against. Don't like that terminology? Fine. But I think it's pretty clear that these were the targets of argumentation. And like I said, if only you were right that the field had moved past them...
Fair enough. I guess I read the sentence "no less than the entire feature calculus of minimalist syntax is built on this purported model of case & agreement" to mean that the whole feature calculus of minimalist syntactic theory was built on this model (i.e. basically a generate-and-test model that filters out unchecked uninterpretable features). Hence my response: no it's not; there are (I think many) versions of minimalist feature calculus which are built on a model that takes uninterpretability to be a triggering diacritic. And it's not just me. All of the Stabler stable work is like this. Cecchetto and Donati's recent book is such a system. In fact, when people are careful about the feature calculus, they often end up saying that uninterpretability acts as an obligatory trigger for Agree.
On a perhaps more interesting note, I think there is a way to bring together the intuition behind interpretability and the operations based model that you and I both favour. That is to think about the syntactic system as an optimisation of a computational device towards conditions that hold at the syn-sem interface. So one might take the diacritic to be an operational reification of an independently required notion of interpretability. This is speculative, I know, but I think that Chomsky, for example, has gestured at it in various places, and I discuss it in that paper I keep mentioning (or maybe in the one with Peter). Anyway, that view distinguishes the motivation for the diacritic from its internal function.