Thursday, May 14, 2015

Thomas' way too long comment on the Athens conference

This was originally meant as a comment on the vision statements for the Athens conference, but it just kept growing and growing so I decided to use my supreme contributor powers instead and post it directly to the blog (after an extended moment of hesitation because what I have to say probably won't win me any brownie points).

Since I'm presenting a poster at the conference, I've had access to the statements for a bit longer than the average FoL reader. And that's not a good thing, because they left me seriously worried about the future of the field.

Expected Scope


Before moving on, let me explain what I thought this workshop was supposed to achieve, because it is entirely possible that I erroneously assumed a larger scope than the organizers had in mind.

Whenever one talks about the future of a field, there are at least three separate levels at which the discussion can take place:
  1. Scientific Level
    How is the field doing according to scientific standards? What are the core insights, what's the next big research area? Are there important unresolved questions? Do we have to let go of assumptions that have proven a detriment to research progress?
  2. Institutional Level
    Does the scientific work rest on a firm base of institutional support? Is important work being published in the right venues, is there enough money to support research, are there enough new jobs for freshly minted PhDs?
  3. Sociological Level
    How is the field perceived outside its own narrow boundaries? Is there a lot of interest from other disciplines and the public at large? Is there a strong influx of fresh blood that will keep the field alive and vibrant for many years to come?
You might not agree 100% with this division, and it is of course an artificial divide since all those issues interact with each other. A field without wide recognition won't enjoy strong institutional support and thus is limited as to what kind of research can be done. At the same time, lackluster research will negatively impact a field's sociological status in the long run and thus propel it into a downward spiral from which it is very hard to recover. I am sure everybody in academia is aware of these dynamics, and the classification above is just meant as a useful approximation of the kind of things we should keep an eye on when talking about a field's future.

As far as I can tell, the organizers are in full agreement with this assessment since they mention all these issues in their mission statement under Goals and Rationale. In my humble opinion --- which is the opinion of a computational linguist, mind you, albeit one who keeps a close eye on syntactic research --- generative syntax in the Principles-and-Parameters tradition faces several tough challenges at all three levels. Some of them aren't even particularly pressing at this point and might not reach any degree of wide awareness until 20 or 25 years from now. But they will become problematic for the field within a few decades, so they should be part of a conference that's called Generative Syntax in the Twenty-first Century.1

The vision statements suggest that this won't be the case: they talk about the here and now, and maybe the immediate future. But none of them explore a time horizon that goes much beyond the next ten to fifteen years. Fifteen years from today, I will still be at least twenty-five years away from retirement. If the shit hits the fan 25 years from now, I will be personally affected by that as a fellow traveler. So it is in my own interest to think bigger than that.

The Real World


Many (but far from all) of my concrete worries stem from the sociological and institutional developments in academia at large and how generative syntax in particular has neglected to move into a position where it can safely stem the tide. I do not have to spend many words on what those developments are (declining public funding, focus on marketable research, an economically squeezed middle class moving to degrees with safe employment opportunities), nor why theoretical work is particularly affected.

The curious thing is that linguistics is actually in an excellent position when it comes to applicable research and job prospects thanks to the boom of computational linguistics and natural language processing. Phonology is ready and waiting to take advantage of this. The arrival of Optimality Theory in the early 90s led to a profound rethinking of what phonology is all about, and that has resulted in a lot of research that focuses on computational learning models and stochastic formalisms. So phonology PhDs now have to know a fair bit of statistics and machine learning, and they often also have a decent background in R and scripting languages like Python. This makes them great job candidates for companies like Nuance --- not as software engineers, but as part of a rapid prototyping team (i.e. they have to come up with new solutions to a problem and make it work on a computer, but efficient, scalable code is not much of a concern).

Needless to say, Minimalism did not bring about such changes, and I can't shake the suspicion that many in the field see that as a good thing. For example, there never was a push to win back any ground that was lost to TAG, HPSG and CCG in the NLP community, and the general sentiment seems to be that this isn't worth any attention anyway because pure science doesn't meddle in such applied affairs. That strikes me as shortsighted in at least three respects.
  • First, it assumes that practical concerns have no overlap with theoretical concerns.
  • Second, it ignores that many interesting problems arise from the needs of large-scale applications, problems that would have been considered trivial otherwise. Henk van Riemsdijk has a related remark in his statement where he points out that the set of problems worked on in Minimalism is too small and homogeneous.
  • Third, it means that all these competing frameworks now have valuable resources like annotated corpora, machine learning algorithms, functional parsers, and perhaps most importantly, wide exposure and an active research community outside linguistics.
Just to be clear, I am not saying that all of transformational syntax should suddenly be about learning probabilistic grammars that model the usage of demonstrative them. I'm not even saying that this kind of work should be part of more vanilla conferences like NELS. But the fact that there is so little interest in these questions, no attempts at widening the set of licit tools and explanations, a complete omission of such work in the scientific discourse as well as our classrooms, this fact will have a very negative impact in the long run. The institutional livelihood of generative syntax depends on regaining influence with the applied crowd.

The same is true on the cognitive side. For a precious short moment in history, generative syntax was the crown jewel of cognitive science, but that position was quickly lost and attempts to win it back have hardly moved beyond blanket statements that we are right and they better change their methodology, their goals, their objects of study, and their research questions. Few attempts to find common ground, few compromises, my way or the highway. Yes, I'm painting in very broad strokes here, and you might object that psycholinguists have spent a lot of time and effort on collaborations. True, but it is also telling that the outreach is being outsourced to a different subfield.

Keep in mind, we are talking about generative syntax here, not linguistics as a whole (though some of the challenges obviously extend beyond syntax). And generative syntax still works with exactly the same cognitive assumptions as 50 years ago. There isn't anything like a principled research program to substantiate syntactic claims about computational efficiency via cognitive theories of memory, something that one might expect to be a natural outgrowth of early Minimalism's focus on economy conditions. Again, OT brought about a major rethinking of goals and methodology in phonology that opened up the field to a lot of bleeding-edge ideas from neighboring disciplines. Syntax never put itself under the knife like that, and as a result it seems out of step with a lot of neighboring fields nowadays.

The last point, by the way, is a common view among grad students, based on my own, still very recent experiences. While phonologists get to program, design learners, compare OT and Harmonic Grammar, or run artificial language experiments, syntacticians sit down with pen and paper and draw some trees. While semanticists only need to worry about developing an account that gets the facts right, syntacticians play a game with much more elaborate rules where your account has to live up to seemingly arbitrary standards of explanatory adequacy. Just to be clear, that's not how I see things, but it is a common perception among grad students. Like all young people, grad students like exciting new things, and syntax comes across as fairly old-fashioned. The same old questions, the same old answers, nothing new on the horizon. This makes syntax less attractive, and in combination with the other factors named above --- foremost employment opportunities --- it creates a real risk that fewer and fewer students will choose the path of generative syntax and the field will collapse from attrition.

Minimalism needs to open up and become a lot more pluralistic. In fact, that's the whole point of the Minimalist program, isn't it? So why does a community, when given that much freedom, come up with theories that all look so similar? You would expect at least a couple of oddballs out there to propose something radically different. As Henk van Riemsdijk notes, even a minor extension like grafts is considered highly suspect. I'm sure some of you will say that the proposals look similar because they all build on a shared body of firmly established knowledge. But knowledge can be encoded in many different ways, for instance by switching from phrase structure trees to derivation trees, yet everybody sticks to the same encodings. And more importantly, of course all accounts share the same body of results if they hardly ever venture out of the same old theme park of generative syntax problems.

So, to sum up what has been said so far, transformational syntax has lost ground in various fields over the last 30 years (+/- 10), and I don't see any efforts to reclaim it. If nothing is done, funding will eventually dry up, students will turn to other fields, and competing formalisms will fill the power vacuum.

Alright, with the pesky real-world issues out of the way, let's look at purely scientific issues. That should make for a more relaxed reading.

Rethinking Syntactic Inquiry


There are several aspects of how syntactic research is done nowadays that strike me as a serious impediment to scientific progress. In one sentence, it could be described as a hypertrophic focus on implementation at the expense of abstraction.

Why Implementation?

We all know Marr's three levels of description: the computational level specifies the problem and what counts as a solution, the algorithmic level gives a concrete procedure for computing that solution, and the hardware level describes how running this procedure is physically realized. A priori, one would expect generative syntax to operate at the computational level since it is unclear what additional insight the algorithmic level could provide for questions like Plato's problem or Darwin's problem. Yet most of syntactic discourse is about matters of implementation: derivations or representations, features or constraints, copies or multi-dominance, what kind of labels, Agree or covert movement. None of these make any difference for what kind of computations the system is capable of; the proofs for that are myriad and have been discussed many times before. So why is so much attention devoted to these issues?

I believe this to be a result of multiple interacting factors. One is the idea that these distinctions matter because even if we can freely translate between them, one of them provides a more natural or elegant perspective. That's a valid position in principle, but if naturalness plays such a crucial role in the rise and fall of syntactic proposals, why hasn't there been a principled attempt to formulate a metric of naturalness and use that to adjudicate these questions? Presumably because whenever such things have been tried (e.g. in SPE), they have failed miserably. But one person's modus ponens is another person's modus tollens, so instead of concluding that there is no reason for syntacticians to distinguish between equivalent implementations, one opts instead for a reading where the right metric of naturalness is too difficult to formalize and plays it by ear.

Somewhat related to this position is the sentiment that the answer isn't as important as the question. Ultimately we do not care whether we have features or constraints, the important thing is that the discussion has revealed new data or empirical generalizations. That's certainly a good thing, but the worst enemy of a good thing is a better thing. And I can't help but feel that we could have a more efficient debate if we omitted the implementation back-and-forth, accepted either view as valid instead, and used them as prediction generators as we see fit. Just a little dose of Feyerabend to take out the unnecessary stress and keep the blood pressure low.

Another reason for the ubiquity of fine-grained implementations with very specific assumptions is what I call the view of syntactic theory as a close-meshed net. Imagine all logically possible pieces of data laid out in front of you in a three-dimensional vector space, like fish in a vast ocean. You now get out your theory and cast it like a net, trying to catch all the fish you want and none of the others. Although the net has some flexibility, you can only fit so much in it. If you want to catch a particular fish, that means another fish is out of reach and hence automatically ruled out. The tighter your net, the less flexibility you have and the more things are out of reach. That is exactly the kind of result syntacticians like: what you have found out about relativization domains for agreement in your investigation of the PCC automatically explains why you can't have inflected complementizers in English (made-up example). By giving precise implementations, you make very specific predictions for grammaticality in completely unrelated domains.

Net syntax is certainly very elegant when it works. The problem is that it is incredibly risky, and there is no safe strategy for recovering from failure. It is risky because the more specific your assumption, the more likely it is to be wrong. And it is unsafe because failure can go in two directions --- overgeneration and undergeneration --- and neither direction comes with a general-purpose backoff strategy. If you have a machine with a myriad of fine-tuned knobs that spits out a wrong result for a specific data point in a specific language, you'll have a hard time determining which knob to turn to fix that, and you'll have no guarantee that turning that knob didn't break something somewhere else. This makes net syntax a non-monotonic mode of inquiry.

I am fond of a much less ambitious alternative: only say true things. That's not a moral dictum to avoid lies; it means that you should go for claims that are easily verified to be true for all the known data, are sufficiently general to have a high chance of withstanding new data, and can be revised without losing the original claim. So instead of outlining a specific system of agreement for the Person Case Constraint, for example, you would simply posit a few assumptions that you need to get the specific data points, and leave the rest open. Don't posit a specific encoding for your assumptions, because that doesn't matter for your analysis --- all we need to know is that the assumptions can be encoded within the agreed-upon bounds. If you tighten the assumptions any further to rule out overgeneration, make sure that we can easily back off to a more relaxed model again. In a word, your proposal doesn't represent a single theory; it is a class of theories such that at least one of them is the empirically correct one. Your job is no longer to propose the one and only right account, no, you now have to narrow down the space of possible accounts without accidentally ruling out the correct one.

Those of you who are familiar with mathematical inquiry won't be particularly surprised by my proposal. In mathematics, it is fairly common to first study the properties of a general object like a group and then move to more specific ones like a ring, or drop some assumptions to get a more general object, e.g. a monoid. What you do not do is jump from groups directly to semirings because there is precious little that carries over from one to the other. With net syntax, you never quite know what kind of jump you are making as the consequences of minor alterations are hard to predict by design. That is not a good recipe for steady progress.
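To make the analogy a bit more concrete, here is a toy sketch of mine (in Python, with made-up examples, nothing from the conference material): a tool defined once at the level of a general structure, say any monoid (a set with an associative operation and an identity element), keeps working without modification for every more specific structure, such as a group or a ring under addition.

from functools import reduce

def mconcat(elements, operation, identity):
    # Combine a list of monoid elements; only associativity and an identity are assumed.
    return reduce(operation, elements, identity)

# Integers under addition form a group (and hence a monoid): the tool applies.
assert mconcat([1, 2, 3, 4], lambda x, y: x + y, 0) == 10

# Strings under concatenation form a monoid that is not a group: it still applies.
assert mconcat(["is", "lands"], lambda x, y: x + y, "") == "islands"

Nothing in mconcat cares which structure the elements come from; the fewer assumptions it makes, the larger the class of objects it remains correct for. A sideways jump to an unrelated structure, on the other hand, comes with no such guarantee.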

Why Structure?

I feel like I have already rustled plenty of jimmies at this point, so I might just as well go all in: why do we care about structure?

I do not mean this in the sense that sentences should be viewed as strings; quite the opposite, we definitely want tree structures for syntax. What I am wondering is why we should care about the minute details of the structures underlying specific constructions.

As Norbert likes to emphasize, we are linguists, not languists. So the structures are of interest only to the extent that they reveal properties of the faculty of language. Consider for instance the NP-DP debate. Some formalisms favor NPs, some DPs. But why does this matter? They are rather simple permutations of each other and cannot be distinguished on the grounds of well-formedness data. Given what we know about syntax and formal language theory so far, it makes no difference for the complexity of the computations involved in language. It has no effect on parsing, and none on learnability --- again, as far as we know at this point. So, sticking with the mantra of only saying true things, shouldn't we simply leave this issue open until we find some way of differentiating between the two? What could we possibly gain from picking either one?

Even more radically, what do we gain from assuming that it is always the same structure? Maybe these things are even underspecified for the learner and it never makes a decision. As far as I can tell, the only advantage of picking exactly one structure is that it makes it easier to compute the predictions of specific assumptions --- but that just raises the question of why syntacticians are still doing these things by hand while phonologists do the heavy lifting with the aid of tools like OT-Soft.

There might of course be cases where a clear complexity difference arises, where one analysis is much simpler than the other. But those are few and far between, and I don't understand why we should overcommit ourselves in all the other cases. If you can keep your assumptions minimal, keep them minimal. Don't assume what you don't need to assume.

I have the feeling that this ties back to the points about implementation and why it is such a common thing in syntax. There is this desire to get things exactly right, and you cannot do that if you deliberately leave things open. Personally, I think it is much more important not to get things wrong. This seems like an innocent distinction, but I think it is one of the main reasons why generative syntax is done the way it is done, and it does not strike me as a healthy way of doing things.

Wrapping Up


There are a couple more things I wanted to write about, for instance how syntax is still taught in a way that preserves the status quo rather than training multi-disciplinary students who can, say, design UG-based learners (yet at the same time syntax students also hear less and less about the history of the field, and many aren't even taught GB, rendering 15 years of research inaccessible to them). And my more relaxed stance towards structure ties into a hunch of mine that the future of syntax will involve a lot of computer tools that automatically infer possible tree structures, simply because the truly interesting questions don't need more precision than that. I also wanted to argue that syntacticians are selling themselves short by focusing all their attention on language when their skill sets can be helpful in any area that involves inferring hidden structure from sequences of symbols, in particular in biology. This is also a scientifically relevant question, for if we find the same dependencies in syntax and, say, protein folding, doesn't that suggest a third factor at play? But frankly, I'm too exhausted at this point...

Let me be clear that this post is not meant as a devious attempt of mine to rain on the generative syntax parade. I have honest concerns about the health of the field and can't shake the feeling that it has been stuck in place for years now, with a lot more interesting things happening in other parts of the linguistic community. Of course I might be completely mistaken in my criticism, my suggested solutions, or both. I would actually love to see a well-reasoned argument that dispels all my worries. But for that they first have to be part of the discussion; some of the points I made were briefly mentioned in some vision statements, but many weren't. We'll see whether I'm a happy camper after the Athens workshop.


  1. But see this post by Peter Svenonius clarifying that the organizers were really just thinking in terms of the next 15 years. Fair enough, it probably makes sense to keep the discussion focused. But as I explain above, that's not a perspective anybody my age should be satisfied with.

21 comments:

  1. Hi Thomas,

    Thanks for the thought-provoking post. There's a lot to chew on here, so the following is not meant to be exhaustive –

    1. It seems that you see an in-principle distinction between overgeneration and undergeneration (or, to use non-linguistic terminology, between false positives and false negatives), and I'm not sure where that distinction comes from.

    2. I fear there is a conflict between your "say only true things" methodology and how some of the most exciting discoveries in the history of our field were made. Take for instance Ross' discovery of island effects, perhaps the single most exciting discovery in the history of generative syntax (a strange proclamation on my part, since this is a matter of taste, but...). Wouldn't the approach you're advocating lead us to say something like "grammatical questions are formed by moving the wh-phrase to sentence-initial / COMP / Spec,CP position (in English)" and, crucially, lead us to stop there? After all, there are very few (if any) characterizations of what islandhood is about that don't rule out some fish that you actually want to catch [though, interestingly enough, these are all wh-argument-fish; the regular net looks like it's perfect when it comes to wh-adverb-fish]. So, we would have to refrain from saying much at all about islands, save for maybe "there are some." The road is short from this to: "Language is generated by a mildly-context-sensitive grammar. There – solved it!" The thing is, I know you (Thomas) don't subscribe to the kind of reductionism embodied by this caricature-quote. So I must be missing something; what is it?

    1. 1. There are at least two reasons why overgeneration is less of an issue.

      A) On a scientific level, overgeneration might actually be the right thing: the grammar formalism carves out a class of grammars, but some of them might not be learnable, others cannot be processed efficiently, and others aren't diachronically stable enough to ever "infect" a speech community. This kind of factorization isn't available with undergeneration; there the grammar clearly has a problem.

      B) Pragmatically, many useful properties are downward entailing and thus carry over from a big class to a smaller subclass. For instance, if you have a parser for class X, then it will also work for any subclass of X (see the little sketch right below). If it turns out that language isn't just in class X, but a subclass thereof, you don't have to throw away your parser. Upward entailing properties tend to be negative properties like undecidability, which do not engender a lot of productive work.
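      Here is a minimal sketch of that point in Python (my own toy example, not anything from the actual NLP literature): a CKY recognizer written once for context-free grammars in Chomsky normal form also handles, without any changes, a grammar that happens to generate a regular language, since the regular languages are a subclass of the context-free ones.

def cky_recognize(words, lexical, binary, start="S"):
    # True iff the CNF grammar (lexical rules A -> w, binary rules A -> B C)
    # derives the given sequence of words.
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {A for (A, word) in lexical if word == w}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C) in binary:
                    if B in chart[i][k] and C in chart[k][j]:
                        chart[i][j].add(A)
    return start in chart[0][n]

# A properly context-free language: { a^n b^n | n >= 1 }.
lex = [("A", "a"), ("B", "b")]
anbn = [("S", "A", "T"), ("S", "A", "B"), ("T", "S", "B")]
assert cky_recognize("a a b b".split(), lex, anbn)
assert not cky_recognize("a b b".split(), lex, anbn)

# A regular language, (ab)+: a strict subclass, same recognizer, no changes.
ab_plus = [("S", "A", "B"), ("S", "A", "T"), ("T", "B", "S")]
assert cky_recognize("a b a b".split(), lex, ab_plus)
assert not cky_recognize("a b a".split(), lex, ab_plus)

      The recognizer itself isn't the point, of course; the point is the direction of inheritance: a positive property established for the bigger class comes for free for every subclass.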

      2. First the weak rebuttal: I'm not positing Only say true things as a dogma one always has to adhere to. Scientific inquiry is too complex a process to be broken down into a single method --- as indicated by the Feyerabend reference, I think we should try as many methods as possible. So besides Only say true things, there are also principles like Be bold (given a choice between two claims that cover the same observed data, go for the more restrictive one), Factorize (break complex problems down into simple ones), and Something is better than nothing (if a problem is too complex, ignore the complex stuff and focus on a simple subproblem). We all put slightly different weights on those, and that's why we all have slightly different views of what constitutes good research. I did not mention those other principles because I feel that they are already very popular in linguistics. Only say true things, on the other hand, is not a common strategy, and it is not clear to me why.

      Alright, now for the stronger point: I disagree with the presupposition that we cannot say much about islands under Only say true things.
      Here's a sketch of what a possible line of inquiry would look like:

      i) Is it unsurprising that islands exist? That is to say, is a system that can compute movement dependencies also capable of enforcing island conditions? (the answer seems to be yes)
      ii) Do we find all logically possible islands? (the answer is no)
      iii) What islands do we find, and what types of islands are there (e.g. strong vs weak)?
      iv) How can we unify all these types under a general definition?
      v) Within this newly defined class of islands, do some islands form natural subclasses? What property can be used to single them out?
      vi) What do these properties correspond to on a computational level?

      We do not need to posit a full list of islands, nor a mechanism for creating islands. Instead we try to develop a theoretical object Island that is big enough to accommodate all the empirically attested islands, and at the same time rules out a significant chunk of the logically possible but unattested islands.

      If you want a more concrete example, I have a paper that derives islandhood from optionality, which can be used to unify the Adjunct Island Constraint and the Coordinate Structure Constraint while still allowing for parasitic gaps and across the board movement (the CSC part is not in the paper). It does not assume anything about the grammar except that displacement involves a dependency at the target site. With that, island status is a mathematical corollary of optionality, so the question is no longer why the AIC and the CSC hold, but why there are certain exceptions to them.

  2. Maybe "there are some (islands)" is all that could responsibly be said at the moment, specially given what at least some people suggest: that "none of the syntactic environments that have been claimed to prohibit extraction are impermeable to extraction (Chaves 2012)". That this is true is, of course, arguable, but we will never know if syntacticians are unwilling to look into it from other perspectives (At least minimalist syntacticians. The guy I quoted works in HPSG).

    1. @Klara: The assertion that there are exceptions to every island constraint, even if true, misses an important point. As David Pesetsky has pointed out on this very blog, the classical descriptions of syntactic islands are effectively exceptionless if the element being extracted is why or how. So even if Chaves were technically right (and I'm not sure that's the case), it would be precisely the kind of too-coarse description that I was concerned about in my comment to Thomas.

      Second, while I am no expert on this, the work of Sprouse et al. seems to suggest that the idea that islands arise as the result of processing difficulty – which, I take it, is Chaves' argument – is on very shaky ground.

      Finally, I reject the idea that syntacticians, minimalists in particular, are unwilling to look at islandhood from other perspectives. The whole point of minimalism is to reduce as much language-specific "stuff" to extra-linguistic factors. But such attempts need to be held responsible to the facts they are attempting to reduce, and this includes what happens with why and how.

    2. @Omer: I pretty much agree with everything in your comment, but I think there is one point that should be probed a little more, and that's the status of why and how with respect to islands. In isolation, the fact that they can never be extracted is rather unremarkable --- maybe that's just an idiosyncrasy of certain languages like English. After all, the null hypothesis is that every grammar carved out by your formalism is a possible natural language, and it is very easy to write grammars that block extraction for specific lexical items. Also, I believe there are languages that allow both arguments and adjuncts to be extracted from islands, no? So if that were the complete picture, I would say "move on, there's nothing to see here".

      But crucially that is not the complete picture because we do not seem to find "reverse islands", i.e. islands that are opaque to arguments but not to adjuncts. That is a puzzling universal, something we would not expect under the view that all logically possible island types exist in some language. So if your class of grammars cannot explain the absence of reverse islands, it is missing an important generalization.

    3. What are the languages that allow adjuncts to be extracted from islands?

    4. @Alex: I think you could count even English since extraction of when, for instance, is sometimes claimed to be fairly acceptable. And the effect is even more pronounced with where (data from the Szabolcsi & den Dikken survey paper):

      1a) When did John ask whether to do this _?
      1b) Where did John ask whether to read this book _?

      My impression was that this is even more pronounced in other languages, but I can't find anything in my notes, so I might be imagining things.

      On a more facetious note, it's a trivial truth that many languages have islands that both arguments and adjuncts can be extracted from, we just call them non-islands ;)

    5. Is that really what you had in mind? 'Whether' never generates very strong island effects, and those two examples are both pretty clearly degraded on the intended interpretations. In any case, you seemed to be suggesting that there were languages where extraction of adjuncts out of islands is systematically permitted (and not just in the cases where it might be marginally acceptable in English). If there are, I'd certainly be interested to know about it.

    6. Also, the paper you link to assigns ?? to (1a) and ? to (1b), which is worth noting.

    7. I'm not quite sure what you have in mind. If extraction were systematically permitted, the construction would not be classified as an island in that language. Are you asking whether there are languages where the analogue of an English island is not an island? That's not unheard of: not all languages have the same islands, at least not obviously; see e.g. extraction from relative clauses in mainland Scandinavian (I know, I know, that might involve unpronounced resumptive pronouns, but that's hardly a settled issue).

      There might be a base stock of islands you find everywhere (modulo some minor differences with respect to finiteness, event structure, etc), and if so, I'll gladly add it to the list of puzzling typological gaps. That said, I actually don't know of any broad typological surveys of that kind, so any pointers would be greatly appreciated.

    8. I don't think the terminological fussing helps here. The question is simply whether there are languages where you can say something like the following (on the interpretation where it's a question about methods of introduction):

      How did you ask who introduced Mary?

      Earlier, you appeared to be hinting that there were such languages, so I assumed that you would have some kind of handle on how to formulate the relevant questions. I don't personally see any difficulty in doing so. Either there are languages where adjuncts can be extracted from most of the configurations that are islands for adjunct extraction in languages such as English, or there aren't. If there are, then I think that would be interesting and worth knowing about.

      Notice that the question is just a request for potentially interesting data, and as such doesn't have to be stated with full formal precision. It's like asking "Hey, do you know of any languages with object agreement?" The question can potentially be given a useful answer even if it may be a tricky thing indeed to give a precise definition of "object agreement".

    9. I don't know of a specific case like that, and it's not what I had in mind with my remark; that's why I didn't get what kind of data you were asking for. For English, the closest might be how are you wondering whether to solve the problem _, which is said to be fine if some set of choices is fixed by the context. But since you consider whether islands unrepresentative, the example doesn't qualify, I guess.

    10. Well, your comment seemed to refer to languages other than English that "allow both adjuncts and arguments to be extracted from islands". I wasn't aware of any such languages, which is why I was interested to know which languages you had in mind. (The fact that I am not aware of any such languages means very little, since I have zero typological knowledge.)

      If we're just looking at the English data, then as far as I can see, extraction of an adjunct out of even a 'whether' island virtually always makes a sentence perceptibly worse than its non-island-violating counterpart:

      ??How are you wondering whether to solve the problem?
      How are you planning to solve the problem?

      There might indeed be some interesting generalizations regarding which adjuncts are easier to extract out of islands of various sorts, but in English at least, the island always seems to make itself felt in the case of adjunct extraction.

    11. The claim wasn't intended to be about languages but about island types. The logically possible types can be classified according to what can be extracted:

      A) nothing
      B) only (certain) arguments
      C) only (certain) adjuncts
      D) both (trivially true if we're not dealing with an island, but also seems to hold for some island types)

      C is unattested in any language as far as I know, while the other types seem fairly common. That said, I was treating D as attested just based on the fact that adjunct extraction from islands has been discussed in the literature. Now that you've pushed me on the examples, the ones I can find are indeed all marked, so maybe if one starts counting special cases like that, A (and possibly even B) might not exist after all. Is there anything like a 100% exceptionless island?

  3. This comment has been removed by the author.

  4. Just to add a voice in response to Thomas Graf's comment on the Athens Conference (I have not read all of the subsequent discussion): as one of the invited ancient ones, I have to say that I am not at all offended by the sentiments expressed above. I find myself entirely sympathetic. I am in general frustrated with the minimalist syntax net, where there are so many issues at a coarser grain that we have no good theories about yet. I feel like there is a low-level toolbox of assumptions that I don't actually agree with, but which I am not allowed to undo or ignore without being left out of the conversation, intellectually speaking. In this sense, I think I differ at some kind of gut level from my friend and colleague David Adger. But I do always have the best kinds of syntactic arguments with him, and we agree on a lot of things I consider really important. I think this blog and these conversations are an excellent preparation for the event. Looking forward to Athens. (There may be shouting. Much shouting. :))

  5. Ah, sorry I won't be there to shout with you, Gillian!
