
Monday, April 10, 2017

A derivation "towards LF"? Hardly. (Lessons from the Definiteness Effect.)

A while ago, I took part in a very interesting discussion over on Linguistics Facebook. The discussion was initiated by Yimei Xiang, who was asking the linguistics hivemind for examples that demonstrate how semanticists and syntacticians approach certain problems differently. I chimed in with a suggestion or two; but the example that I find most compelling came from Brian Buccola, who brought up the Definiteness Effect.

To a first approximation, the Definiteness Effect refers to the fact that low subjects in the expletive-associate construction allow only a subset of the determiners that are allowed in canonical subject position (the observation goes back to Milsark's 1974 dissertation):

(1) There was/were {a/some/several/*the/*every/*all} wolf/wolves in the garden.

Now, I must admit I'm not as familiar as I should be with the semantic literature on this topic. What I do know is that there is a long tradition (going back to Milsark himself) of attributing this effect in one way or another to "existential force." The idea is that sentences like (1) assert the existence of a wolf/wolves who satisfy the predicate in the garden, and that we should seek an explanation of the Definiteness Effect in terms of the (in)compatibility of the relevant determiners with this assertion of existence.

This is plainly wrong, as we will see shortly. But since this is an unusually narrow thing to be writing about on Norbert's blog, let me say a bit more about why I think this is an interesting/illuminating test case.

There's a persistent intuition, which has pervaded work within the Principles & Parameters / Government & Binding / Minimalist Program tradition, whereby syntax is a derivation "towards LF." In other words, insofar as syntax has a telos, that telos is assembling a structure to be handed off to (semantic) interpretation. The other interface, externalization to PF, is something of an "add-on." (See, for instance, this call for papers, which takes this (supposed) asymmetry between LF and PF as its point of departure.)

Now, the general claim that LF has a privileged role might be correct regardless of what we find out about the true nature of the Definiteness Effect in particular. But the only way I can envision reasoning about the general claim is by looking carefully at a series of test cases until a coherent picture emerges. With that in mind, it is interesting to consider the Definiteness Effect precisely because it looks, at first blush, like an instance where semantics is "driving the bus": a semantic property (the interaction of existential force with a certain class of determiners) dictates a syntactic property (where certain noun phrases can or cannot go). What I'd like to show you is that this is not actually how the Definiteness Effect works.

The crucial data come from Icelandic (I know, try to contain your shock). One important way in which Icelandic differs from English is that the element that will move to subject position, in the absence of an expletive or some other subject-position-filling element, is simply the structurally closest noun phrase – regardless of its case. To see why this matters, let's start with a sentence like (2). This sentence behaves the same in Icelandic as it does in English, but it forms the baseline for the critical case, later on.

(2) There seems to be {a/*the} wolf in the garden.

In ex. (2), the noun phrase [DET wolf] is part of the infinitival complement to seem, rather than in post-copular position as it is in (1). But the semantics-based explanation of the Definiteness Effect cannot afford to treat the similarity between (1) and (2) as a coincidence if it has any hope of remaining viable, and so whatever one says about "existential force" in (1) must extend to [DET wolf] in (2).

Now consider sentences of the form in (3), an English version of which is given in (4):

(3) EXPL seems [DET1 experiencer (dative)] to be [DET2 thing (nominative)] in the garden.

(4) There seems to the squirrels to be {a/*the} wolf in the garden.

The semantics-based explanation must now extend the same treatment to [DET2 wolf] in (4). But here's where things start to go awry. In Icelandic, it is not DET2 that is subject to the Definiteness Effect in a structure like (3); the restriction in Icelandic affects DET1, while DET2 can be whatever you want.

Here's some Icelandic data showing this, from Sigurðsson's (1989) dissertation:

[Icelandic examples (14) and (15), from Sigurðsson 1989]
In exx. (14a-b) we see that, in the absence of a dative experiencer, Icelandic behaves like English (cf. (2), above): the Definiteness Effect applies to the nominative subject of the embedded infinitive. In exx. (15a-b), however, we see that when there is a dative experiencer (mér me.DAT), it is the experiencer that is subject to the Definiteness Effect, whereas the nominative subject of the embedded infinitive can now be definite (barnið child.the.NOM) even while remaining in its low position.

Where does this leave the semantics-based explanation? Insofar as "existential force" is responsible for the Definiteness Effect, it has to be the case that existential force shifts from the downstairs subject to the dative experiencer only when the experiencer is present (cf. (14b) vs. (15a)), and only in Icelandic (not in English; cf. (4) vs. (15a)). This seems to me like a reductio ad absurdum of the "existential force" approach.

There is a much simpler, syntax-based alternative to all of this. The Definiteness Effect can be accounted for if any DP headed by a strong determiner must attempt to move to subject position (even 'the garden' in (1-2, 4) must attempt to do so!). The rules governing what can actually, successfully move to subject position differ from language to language, and, consequently, the question of which noun phrases can and cannot be definite in situ will have different answers in different languages, too.
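For concreteness, here is a small toy sketch (in Python) of how that logic derives the English/Icelandic asymmetry. This is purely illustrative and not anyone's actual formalism: the two "eligible mover" rules below are deliberate simplifications of the prose above (English: the structurally closest nominative; Icelandic: the structurally closest noun phrase regardless of case), and the DP entries are schematic stand-ins rather than glosses of real examples.

```python
# Toy sketch of the syntax-based account described above; not anyone's actual
# formalism. Assumptions baked in purely for illustration: the two language-
# specific "eligible mover" rules, and schematic DP entries in place of real examples.

def eligible_mover(dps, language):
    """Return the DP that this language allows to move to subject position."""
    if language == "Icelandic":
        # Simplification: the structurally closest DP, regardless of case.
        return min(dps, key=lambda dp: dp["depth"])
    if language == "English":
        # Simplification: the structurally closest nominative; datives are skipped.
        nominatives = [dp for dp in dps if dp["case"] == "nom"]
        return min(nominatives, key=lambda dp: dp["depth"]) if nominatives else None
    raise ValueError(f"no movement rule defined for {language}")

def grammatical(dps, language, subject_position_filled=True):
    """A strong DP that is the eligible mover, yet is stranded in situ because
    subject position is already filled (expletive, adjunct, ...), crashes the
    derivation: that is the Definiteness Effect on this toy model."""
    mover = eligible_mover(dps, language)
    for dp in dps:
        if dp["strong"] and dp is mover and subject_position_filled:
            return False
    return True

# Schematic DPs for the configuration in (3): a dative experiencer above a
# nominative 'thing' inside the infinitive (smaller depth = structurally higher).
exp_def     = {"case": "dat", "depth": 1, "strong": True}   # definite experiencer
exp_indef   = {"case": "dat", "depth": 1, "strong": False}  # indefinite experiencer
thing_def   = {"case": "nom", "depth": 2, "strong": True}   # definite downstairs nominative
thing_indef = {"case": "nom", "depth": 2, "strong": False}  # indefinite downstairs nominative

# English, cf. (4): the effect targets the downstairs nominative (DET2),
# and a definite experiencer ('the squirrels') is harmless.
print(grammatical([exp_def, thing_def],   "English"))    # False  (*the wolf)
print(grammatical([exp_def, thing_indef], "English"))    # True   (a wolf)

# Icelandic, cf. (15): the effect targets the experiencer (DET1),
# while DET2 may be definite in situ.
print(grammatical([exp_def, thing_def],   "Icelandic"))  # False
print(grammatical([exp_indef, thing_def], "Icelandic"))  # True
```

The point of the sketch is simply that a single requirement on strong DPs, combined with a language-specific rule about which DP can actually move, shifts the locus of the effect without any appeal to "existential force."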

Another thing to note is that the expletive plays no role here. The ungrammatical variants of (1-2, 4, 14, 15), showing the Definiteness Effect, all have expletives in them. But if your language happens to allow other things – e.g. an adjunct like 'today' – to occupy the preverbal subject position, you can get the same effect with no expletive at all. Here's some more Icelandic data, this time from Thráinsson (2007), showing this:

[Icelandic examples, including (6.52d), from Thráinsson 2007]
On the syntactic approach to the Definiteness Effect sketched above, what these data mean is that adjuncts get to move to preverbal subject position only if no nominal has done so. That is, if there is a nominal that must attempt movement to subject position (as all nominals headed by strong determiners must do), and which is in a position where such an attempt would succeed (e.g. allir kettirnir all cats.the.NOM in (6.52d)), then it preempts any adjunct from being able to move there. Definiteness Effect sans expletives.
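If one treats an adjunct in preverbal position as just another way of having subject position filled by something other than the eligible nominal, the toy grammatical function from the sketch above already captures this preemption pattern (again with schematic stand-ins, not glosses of Thráinsson's actual examples):

```python
# Continuing the toy sketch above: an adjunct in preverbal subject position is
# just another way of having that position filled by something other than the
# eligible nominal. (Schematic stand-ins, not glosses of Thráinsson's examples.)
all_the_cats = {"case": "nom", "depth": 1, "strong": True}   # cf. allir kettirnir
some_cats    = {"case": "nom", "depth": 1, "strong": False}

# Schematically: 'Today were {some cats / *all the cats} in the garden'
print(grammatical([some_cats],    "Icelandic", subject_position_filled=True))  # True
print(grammatical([all_the_cats], "Icelandic", subject_position_filled=True))  # False
```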

So the narrow take-home message is that the Definiteness Effect has nothing to do with "existential force." What does this mean for the relation between syntax and semantics? Obviously, weak and strong determiners do differ semantically; that is a truism. But a noun phrase will not exhibit the Definiteness Effect unless it is in a position where it is a candidate for movement-to-subject. And which positions these are is a matter that is subject to morphosyntactic variation of a kind that has nothing to do with semantics. Basically, some noun phrases bear a diacritic that forces them to try to move to subject position; whether they bear this diacritic or not seems to be grounded in an interpretive property (strong vs. weak determiners); but whether they succeed in moving or not seems to have no effect on their interpretation (see, e.g., barnið child.the.NOM in (15a), happily interpreted as definite in its low position). The Definiteness Effect, then, is not about semantics except insofar as the presence of the diacritic [+must try to move to subject position] is semantically grounded. Hardly a derivation "towards LF"; the diacritic must be present on the relevant determiners to begin with, before (the relevant part of) the derivation even starts.

What are the consequences of this for the broader question concerning LF as the telos of the derivation? In one sense, not much: this is but one case study; so it turns out that the Definiteness Effect does not match the relevant profile of LF-as-telos. It just means one less entry in the relevant column. Not exactly earth-shattering, there. But in another sense, I think the profile of the Definiteness Effect is the norm, not the exception: syntax pays attention to some (interestingly, not all) semantic distinctions, but it in no way "serves" those distinctions. Definite noun phrases don't move "in order to achieve a definite interpretation" – they just move or don't move (perhaps in a way that depends on their definiteness and the morphosyntactic properties of the language in question), and then they are interpreted however they are interpreted, regardless of where they ended up. I've argued that the exact same thing is true of the relation between specificity and Object Shift. And I suspect that the same is true of almost every single case in which it looks like syntax "serves" interpretation: it is an illusion. There are certain syntactic features that are interpretively grounded (definiteness, specificity, plurality, person features, etc.); and these features can drive certain syntactic operations. But what the syntactic derivation is doing is not constructing a representation that more closely matches the target interpretation. It's doing its own thing. Sometimes this will line up with semantic properties of the target interpretation – like "existential force" – but in those narrow instances where it does, it's really something of an accident. The next time someone tells you that syntax is about constructing "meaning with sound," take it with a boulder of salt.

––––––––––––––––––––

UPDATE: As some commenters (esp. Ethan Poole over on facebook, and David Basilico down here in the comments) have pointed out, the data in (14-15) are confounded in some non-trivial ways. I was attempting to be cute and show that the essential observations have been around for close to 30 years – which I still believe to be true – but I now see that it would also have been helpful to include some less confounded data. In service of this, here are some data from my own 2014 monograph that hopefully demonstrate the same points more clearly:

[Icelandic examples from the author's (2014) monograph]

13 comments:

  1. Some interesting discussions of this over on facebook. (I've made the post public, but you might still need a facebook account to see it...)

    See in particular the data from Vangsnes (2002) that Ethan Poole notes!

  2. There's an unstated assumption/axiom in this argumentation that I think might be leading you to the wrong conclusion. You're assuming that our current theory of LF/interpretation/semantics (call it Theory L) is (more or less) correct. So what the data is testing is not the LF-as-telos theory (call it Theory S), but the conjunction of LF-as-telos and Theory L. Assuming the Definiteness Effect data is, in fact, incompatible with S&L, then we have two possible conclusions:
    1) Reject Theory S (your choice)
    2) Reject Theory L

    Personally, I think Theory L is on much shakier ground than Theory S and it deserves some rethinking.

    Obviously if we reject L we need to replace it with a new theory, just as rejecting S necessitates a new syntactic theory. Either choice opens up a whole new research programme with all sorts of interesting questions. For instance, if we assume syntax is "doing its own thing" we have to ask what that thing is and how it came about. Where did the features that drive syntax come from? Why are some interpretable? Is there a difference between semantic and non-semantic features? Are there linguistic expressions that are purely sound-meaning pairs, or do they all have some residue of syntax? And so on...

    Replies
    1. @Dan: Yes, I agree with your characterization of the logical structure of the problem. But I have some antecedent reasons (that go unstated in this post) to opt for rejecting (what you call) S. For example, the interpretable/uninterpretable distinction, and the idea that all syntactic operations are "free" and their occurrence enforceable via "interface conditions" only, are untenable, for reasons having nothing to do with semantic interpretation. (See my 2014 monograph, and for a handy – but partial – cheat-sheet, see this handout.) So your suggestion that we reject (what you call) L is, from where I sit, an attempt to salvage something that doesn't work in the first place.

    2. @Omer: I can't speak to the contents of your monograph, but the handout looks like it suffers from the same issue. It purports to be arguing against a theoretical statement (i.e. Syntax is driven by interface conditions), but in order to make the argument it brings in additional premises (both explicit (P1 and P2) and implicit (e.g. what counts as a primitive for a given module)), shows that the data is incompatible with interface-driven-syntax+P1+P2+implicit-premises, and concludes that syntax cannot be (solely) interface driven.
      This seems pretty sound, but the same data could probably be used to argue against any one or combination of your premises if we assume syntax is interface driven.

    3. @Dan: What you're saying is true for every argument in the history of generative linguistics, as far as I can tell. If someone shows that X, Y, and Z together entail not-W, you can cling to W and assert that, therefore, one of X, Y, and Z must be false. You can question the premises of any argument – actually, let me correct myself: you can question the premises of any argument that is explicit enough about its premises – so I'm not quite sure what your point is.

    4. @Omer: I think "every argument in the history of generative linguistics" is a bit strong. I can think of two other forms of arguments off the top of my head:

      (1) Our theory says X. X doesn't account for data-point D. If we add Y to our theory, X&Y can account for D. What's more, X&Y accounts for data-point E. Therefore, let's add Y to our theory.

      (2) Our theory says X&Y. W subsumes both X and Y. Therefore let's replace X&Y with W.

      If memory serves, (1) shows up in arguments in favour of the movement theory of control, and (2) is how Merge and Move were unified.

      The point of my response was that the antecedent reasons for rejecting LF-as-telos are on relatively shaky ground.

      The point of my original comment is in its last paragraph. Suppose we accept your rejection of LF-as-telos and interface-driven syntax. What next? What other parts of the theory are we forced to reject? What does the new theory look like? Does it have any explanatory force? If generative linguistics is a science, it should progress, and I don't see where we would go from rejecting LF-as-telos.

      I'll admit that might be too much to expect from a blog post and a handout. I guess I should pick up your monograph.

    5. Interesting pair of examples. (1) It is my impression that the Movement Theory of Control is empirically less successful than most of its competitors. (2) The unification of Merge and Move is nice, but doesn't work nearly as well as advertised (problems like why lower copies/occurrences don't count as interveners, which ultimately require a reification of 'chain', which undoes much of the oomph of the unification in the first place).

      I suspect you and I have very different definitions of "explanatory force." I think a theory that cannot, given reasonable premises, account for very robust data, has no explanatory force at all. A theory like that becomes an exercise in philosophy, not linguistics. (I know Norbert differs with me on this, and you may too. That's fine – I'm just stating my position.)

      If you have an argument against one of the relevant premises on its own terms, let's hear it. (Note: that it forces you to reject a conclusion you are fond of – such as interface-driven-syntax – is not an argument, in this sense.) Pending that, the onus is not on me to come up with "explanatory force"; the competitor has none.

    6. Oh, and as for two logical structures of arguments you noted:

      (1) things like "X&Y can account for D" are seldom evaluable without some further premises concerning what it means to "account for D", and so are susceptible to the same modus tollens maneuver

      (2) if W logically entails X and Y, then yes; but otherwise, what it means for W to "subsume" X and Y is laden with premises (cf. the debates on the Movement Theory of Control)

      So maybe there are premise-free arguments in linguistics, but you still haven't shown me one.

    7. I'll take one more run at this.
      Suppose we accept your arguments here, in the handout, and in your monograph. What's next? Do we take the conjunction of all your premises as our new theory? What are the consequences? Is it any better than a theory based on SMT?

      Refuting a theory isn't that tough. Building a better theory is. As the philosopher of science Imre Lakatos once wrote, "All theories ... are born refuted and die refuted. But are they [all] equally good?"

    8. @Dan: Yes, there is an alternative theory, based on the (old) idea that certain operations are syncategorematically triggered by the merger of certain heads, obligatorily and irrespective of the representational outcome. This is the alternative, SMT-free theory.

      Is it better than the SMT-based theory? Well, it is if your criterion is capturing the principles that underpin natural language. (What the handout shows is that the SMT-based theory, given certain fairly reasonable premises, cannot do this; for the alternative, positive proposal, you'll have to read the book.) So, no, it is not the case that "they [are all] equally good." At the same time, I readily admit that the theory I'm defending has a harder time, e.g., from the standpoint of "Darwin's Problem" – since it obviously increases the amount of language-specific machinery.

      The way I understand minimalism – and the only way in which I can count myself as a minimalist, actually – is that the goal is to craft a theory that better meets these goals (the Beyond-Explanatory-Adequacy goals) while not forfeiting the ability to account for how natural language actually works. You can see the work we've been discussing as a demonstration of how a strict SMT-compliant theory has forfeited that ability.

      So now we (and I do mean both of us) have to choose: is accounting for natural language less important than "Darwin's Problem"-compliance, or more important? This is a methodological choice, not a principled one, since having both does not seem to be an option. But since our understanding of the lay of the land concerning "Darwin's Problem" is, by comparison, a fuzzy heap of speculations (and Chomsky would be the first to tell you this), I am going to place my intellectual bet on the theory grounded in the domain we know something about. So, yes: a better theory.

  3. I am unfamiliar with the syntactic analysis given by S as well as the Icelandic data, so I am confused about some aspects of the first set of Icelandic data. The ungrammatical example (14c), which I assume to be, in English, 'the child was felt to be troublesome', does not behave like English at all, since it is marked as ungrammatical. But (14a), in which the aux 'be' seems to be missing in the upstairs clause, allows 'the child' as a subject, and it gets some sort of impersonal reading (with 'people'). So what is happening in the difference between (14a) and (14c)? Also, depending on where you want to have the base position for the subject, the example in (15a) would have the subject in a derived position (specifier of some TP) if we take that the subject starts out lower, in some sort of small clause complement to 'be'. Again, if this represents a derived position for the infinitival subject, this is not possible in English – '*There seems a dog to be in the garden.' I'm not sure a Milsarkian based theory would have anything to say about these grammaticality contrasts.

    Replies
    1. @David: Fair enough. I've added some more data to the post, following your comments (as well as Ethan Poole's). I hope these address your concerns.

    2. Great. Thanks. I would be curious to see how the information structure people deal with this data.
