It’s that time of year: spring has sprung, classes are almost over, and all of those commitments you made to write papers three years ago and forgot about are coming due. I am in the midst of one such effort right now (due at the end of May). It’s one of those “compare different theories/frameworks” volumes and I have been asked to write on the Minimalist Program (MP). After ignoring the project for a good long time, I initially bridled at the fact that I had agreed to write anything. In order to extricate myself from the promise, I tried to convince the editor that the premise of the volume (that MP was a theory like the others) was false and so a paper on MP would not really be apposite. This tantrum was rejected. I then sulked. Finally, I decided that I would take the bull by the horns and argue that MP, contrary to what I perceive to be the conventional view, has been wildly successful in its own terms and that the reason for its widespread perceived failure is that most critics have refused to accept the premises of MP investigation. Why would they do so? There are several reasons, but the best one (and one that might even be correct) is that the premises for MP investigation (viz. that we know something about the structure of FL/UG and that that something resembles GB) are shaky and so the project is premature. On this view the program is fine; it’s just that we’ve gotten a little ahead of ourselves.
This objection should sound familiar. It is what people who study specific languages and their Gs say about claims about FL and UG. We don’t know enough yet about particular Gs to address questions about FL/UG. Things are more complicated and we need time to sort these out.
I reject this. Things are always more complicated. The time is never right. IMO, GB is a pretty good theory and it is worth trying to see if we can derive some of its features in a more principled way. We will learn something even if we are not completely right about this (which is surely the case). In other words, GB is right enough (or, many of its properties will be part of whatever description turns out to be more accurate) and so trying to see how to derive its properties is a worthwhile project that could teach us something about FL/UG.
This, I should add, is the best reason to demur about MP (and as you can see, I am not sympathetic). Two others spring to mind: (i) MP sharpens the linguistics/languistics kulturkampf and (ii) MP privileges a kind of research that is qualitatively different from what most professionals commonly produce and so is suspect.
I have beaten both these drums in the past, and I do so again here. I have convinced myself that the biggest practical problem for MP work is that it sharpens the contrast between the bio/cog and the philological perspectives on language. More specifically, MP only makes sense from the bio/cog perspective as it takes FL/UG as the object of inquiry. FL/UG is the explanandum. If you don’t think FL/UG exists (or you are not really interested in whether it exists) then MP will seem, at best, pointless and, at worst, mystical omphaloskepsis. It is an odd fact of life that many find their own interests threatened by those that do not share them. I suspect that MP’s greatest sin in the eyes of many is that it appears to devalue their own interest in language by promoting the study of the underlying faculty. This, of course, does not follow. Tastes differ, interests range. But there can be little doubt that one of Chomsky’s many vices is that, by convincing so many to be fascinated by the problems he has identified, he has robbed many of confidence in their own. MP simply sharpens the divide: doing it at all means buying into the bio-cog program. Abandon hope all languists who enter here.
Second, furthering the MP project will privilege a kind of work distinct in style from that normally practiced. If the aim is unification then MP work will necessarily be quite theoretical, and the relevance of this kind of work for the kinds of language facts that linguists prize will be somewhat remote, at least initially. Why? Because if a primary aim of MP is to deduce the basic features of GB from more fundamental principles then a good chunk of the hard work will be to propose such principles and see how to deduce the particular properties of GB from them. The work, in other words, will be analytic and deductive rather than descriptive and inductive. Need I mention again how little our community of scholars esteems such work?
If we put these two features of MP inquiry together, we end up with work that is hard-core bio-mentalist and heavily deductive and theoretical in nature. Each feature suffices to generate skepticism (if not contempt) among many working linguists. This, at any rate, is what I argue in the paper that I avoided trying to write.
I cannot post the whole thing (or at least won’t do so today). But I am going to give you the intro stage-setting (i.e. polemical) bits for your amusement. Here goes, and may you have a happy time with your own thoughtless commitments.
What is linguistics about? What is its subject matter? Here are two views.
One standard answer is “language.” Call this the “languistic (LANG) perspective.” Languists understand the aim of a theory of grammar to describe the properties of different languages and identify the common properties they share. Languists frequently observe that there are very few properties that all languages have in common. Indeed, in my experience, the LANG view is that there are almost no language universals that hold without exception and that languages can and do vary arbitrarily and limitlessly. LANGers assume that if there are universals, then they are of the Greenbergian variety, more often statistical tendencies than categorical absolutes.
There is a second answer to the question, one associated with Chomsky and the tradition in Generative Grammar (GG) his work initiated. Call this the “linguistic (LING) perspective.” Until very recently, linguists have understood grammatical theory to have a pair of related objectives: (i) to describe the mental capacities of a native speaker of a particular language L (e.g. English) and (ii) to describe the meta-capacity that allows any human to acquire the mental capacities underlying a native speaker’s facility in a particular L (i.e. the meta-capacity required to acquire a particular G). LINGers, in other words, take the object of study to be two kinds of mental states, one that grammars of particular languages (i.e. GL) describe and one that “Universal Grammar” (UG) describes. UG, then, names not Greenbergian generalizations about languages but features of human mental capacity that enable them to acquire GLs. For linguists, the study of languages and their intricate properties is useful exactly to the degree that it sheds light on both of these mental capacities. As luck would have it, studying the products of these mental capacities (both at the G and UG level) provides a good window on these capacities.
The LANG vs LING perspectives lead to different research programs based on different ontological assumptions. LANGers take language to be primary and grammar secondary. GLs are (at best) generalizations over regularities found in a language (often a more or less extensive corpus or lists of “grammaticality” judgments serving as proxy). For LINGers, GLs are more real than the linguistic objects they generate, the latter being an accidental sampling from an effectively infinite set of possible legitimate objects. On this view, the aim of a theory of a GL is, in the first instance, to describe the actual mental state of a native speaker of L and thereby to indirectly circumscribe the possible legit objects of L. So for LINGers, the mental state comes first (it is more ontologically basic), the linguistic objects are its products, and those that publicly arise (are elicited in some way) only partially reflect the more stable, real, underlying mental capacity. Put another way, the products are interaction effects of various capacities, the visible upshot of their adventitious complex interaction. So the products are “accidental” in a way that the underlying capacities are not.
LANGers disagree. For them the linguistic objects (be they judgments, corpora, reaction times) come first, GLs being inductions or “smoothed” summaries of these more basic data. For LINGers the relation of a GL to its products is like the relation between a function and its values. For a LANGer it is more like the relation between a scatter plot and the smoothed distributions that approximate it (e.g. a normal distribution).
LINGers go further: even GLs are not that real. They are less real than UG, the meta-capacity that allows humans to acquire GLs. Why is UG more “real” than GLs? Because in a sense that we all understand, native speakers only accidentally speak the language they are native in. Basically, it is a truism universally acknowledged that any kid could have been native in any language. If this is true (and it is, really), then the fact that a particular person is natively proficient in a particular language is a historical accident. Indeed, just like the visible products of a GL result from a complex interaction of many more basic sub-capacities, a particular individual’s GL is also the product of many interacting mental modules (memory size, attention, the particular data mix a child is exposed to and “ingests,” socio-economic status, the number of hugs and more). In this sense, every GL is the product of a combination of accidental factors and adventitious associated capacities and the meta-capacity for building GLs that humans as a species come equipped with.
If this is right, then there is no principled explanation for why it is that Norbert Hornstein (NH) is a linguistically competent speaker of Montreal English. He just happened to grow up on the West Island of that great metropolis. Had NH grown up in the East End of London he would have been natively proficient in another “dialect” of English, and had NH been raised in Beijing he would have been natively proficient in Mandarin. In this very clear sense, then, NH is only accidentally a native speaker of the language he actually speaks (i.e. has acquired the particular grammatical sense (i.e. GL) he actually has), though it is no accident that he speaks some native language. At least not a biological accident, for NH is the type of animal that would acquire some GL as a normal matter of course (e.g. absent pathological conditions) if not raised in feral isolation. Thus, NH is a native speaker of some language as a matter of biological necessity. NH comes equipped with a meta-capacity to acquire GLs in virtue of the fact that he is human and it is biologically endemic to humans to have this meta-capacity. If we call this meta-capacity the Faculty of Language (FL), then humans necessarily have an FL and necessarily have UG, as the latter is just a description of FL’s properties. Thus, what is most real about language is that any human can acquire the GL of any L as easily as any other. A fundamental aim of linguistic theory is to explain how this is possible by describing the fine structure of the meta-capacity (i.e. by outlining a detailed description of FL’s UG properties).
Before moving on, it is worth observing that despite their different interests LINGers and LANGers can co-exist (and have co-existed) quite happily and they can fruitfully interact on many different projects. The default assumption among LINGers is that currently the best way to study GLs is to study their products as they are used/queried. Thus, a very useful way of limning the fine structure of a particular GL is to study the expressions of that GL. In fact, currently, some of the best evidence concerning GLs comes from how native speakers use GLs to produce, parse and judge linguistic artifacts (e.g. sentences). Thus, LINGers, like LANGers, will be interested in what native speakers say and what they say about what they say. This will be a common focus of interest and cross talk can be productive.
Similarly, seeing how GLs vary can also inform one’s views about the fine structure of FL/UG. Thus both LINGers and LANGers will be interested in comparing GLs to see what, if any, commonalities they enjoy. There may be important differences in how LINGers and LANGers approach the study of these commonalities, but at least in principle, the subject matter can be shared to the benefit of each. And, as a matter of fact, until the Minimalist Program (MP) arose, carefully distinguishing LINGer interests from LANGer interests was not particularly pressing. The psychologically and philologically inclined could happily live side by side pursuing different but (often enough) closely related projects. What LANGers understood to be facts about language(s), LINGers interpreted as facts about GLs and/or FL/UG.
MP adversely affects this pleasant commensalism. The strain that MP exerts on this happy LING/LANG co-existence is one reason, I believe, why so many GGers have taken a dislike to MP. Let me explain what I mean by discussing what the MP research question is. For that I will need a little bit of a running start.
Prior to MP, LING addressed two questions based on two evident, rationally uncontestable facts (and, from what I can tell, these facts have not been contested). The first fact is that a native speaker’s capacities cover an unbounded domain of linguistic objects (phrases, sentences etc.). Following Chomsky (1964) we can dub this fact “Linguistic Creativity” (LC). I’ve already adverted to the second fact: any child can acquire any GL as easily as any other. Let’s dub this fact “Linguistic Promiscuity” (LP). Part of a LINGer’s account of LC postulates that native speakers have internalized a GL. GLs consist of generative procedures (recursive rules) that allow for the creation of unboundedly complex linguistic expressions (which partly explains how a native speaker effortlessly deals with the novel linguistic objects s/he regularly produces and encounters).
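The formal point behind LC can be sketched concretely. The toy rewrite grammar below is purely illustrative (the rules and vocabulary are invented for this example; they are not any proposed GL): a handful of rules, one of them recursive, suffice to generate an unbounded set of distinct expressions.

```python
import random

# Invented toy rules, for illustration only. The NP rule is recursive
# (an NP may contain a relative clause containing another NP), which is
# what makes the set of generable strings effectively unbounded.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # second option recurses
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"], ["slept"]],
}

def generate(symbol="S", depth=0, max_depth=5):
    """Expand a symbol by randomly choosing one of its rewrite rules."""
    if symbol not in RULES:
        return [symbol]  # terminal word
    # past max_depth, fall back to the first (non-recursive-in-NP) rule
    # so that generation always halts
    options = RULES[symbol] if depth < max_depth else RULES[symbol][:1]
    out = []
    for sym in random.choice(options):
        out.extend(generate(sym, depth + 1, max_depth))
    return out

print(" ".join(generate()))
```

The generative procedure, not the (finite) sample of its outputs, is the stable object here, which is the LING point about GLs.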
LINGers account for the second fact, LP, in terms of the UG features of FL. This too is a partial account. UG delineates the limits of a possible GL. Among the possible GLs, the child builds an actual one in response to the linguistic data it encounters and that it takes in (i.e. the Primary Linguistic Data (PLD)).
So two facts, defining two questions and two kinds of theories, one delimiting the range of possible linguistic expressions for a given language (viz. GLs) and the other delimiting the range of possible GLs (viz. FL/UG). As should be evident, as a practical matter, in addressing LP it is useful to have to hand candidate generative procedures of specific GLs. Let me emphasize this: though it is morally certain that humans come equipped with an FL and build GLs, it is an empirical question what properties these GLs have and what the fine structure of FL/UG is. In other words, that there is an FL/UG and that it yields GLs is not really open for rational debate. What is open for a lot of discussion, and is a very hard question, is exactly what features these mental objects have. Over the last 60 years GG has made considerable progress in discovering the properties of particular GLs and has reasonable outlines of the overall architecture of FL/UG. At least this is what LINGers believe, I among them. And just as the success in outlining (some of) the core features of particular Gs laid the ground for discovering non-trivial features of FL/UG, so the success in limning (some of) the basic characteristics of FL/UG has prepared the ground for yet one more question: why do we have the FL/UG that we have and not some other? This is the MP question. It is a question about possible FL/UGs.
There are several things worth noting about this question. First, the target of explanation is FL/UG and the principles that describe it. Thus, MP only makes sense qua program of inquiry if we assume that we know some things about FL/UG. If nothing is known, then the question is premature. In fact, even if something is known, it might be premature. I return to this anon.
Second, the MP question is specifically about the structure of FL/UG. Thus, unlike earlier work where discussions of languistic interest can be used to obliquely address LC and LP, the MP question only makes sense from a LING perspective. It is asking about possible FL/UGs and this requires taking a mentalistic stance. Discussing languages and their various properties had better bottom out in some claim about FL/UG’s limits if it is to be of MP relevance. This means that the kind of research MP fosters will often have a different focus from that which has come before. This will lead LANGers and LINGers to a more obvious parting of the investigative ways. In fact, given that MP takes as more or less given what linguists and languists have heretofore investigated as basic, MP is not really an alternative to earlier theory. More specifically, MP can’t be an alternative to GB because, at least initially, MP is a consumer of GB results. What does this mean?
An analogy might help. Think of the relationship between thermodynamics and statistical mechanics. The laws of thermodynamics are grist for the statistical mechanics mill, the aim being to derive the thermodynamic generalizations from a more principled atomic theory of mechanics. MP stands to earlier theory in the same way. Take (e.g.) GB principles and see if they can be derived in a more principled way. That’s one way of understanding the MP program, and I will elaborate this perspective in what follows. Note, if this is right, then just as many thermodynamical accounts of, say, gas behavior will be preserved in a reasonable statistical mechanics, so too many GB accounts will be preserved in a decent MP theory of FL. The relation between GB and MP is not that between a true theory and a false one, but between a descriptive theory (what physicists call an “effective” theory) and a more fundamental one.
If this is right, then GB (or whatever FL/UG is presupposed) accounts will mostly be preserved in MP reconstructions. And this is a very good thing! Indeed, this is precisely what we expect in science; results of past investigations are preserved in later ones with earlier work preparing the ground for deeper questions. Why are they preserved? Because they are roughly correct and thus not mimicking these results (at least approximately) is excellent indication that the subsuming proposal is off on the wrong track. Thus, a sign that the more fundamental proposal is worth taking seriously is that it recapitulates earlier results and thus a reasonable initial goal of inquiry is to explicitly aim to redo what has been done before (hopefully, in a more principled fashion).
If this is correct, it should be evident why many might dismiss MP inquiry. First, it takes as true what many will think contentious and tries to derive it. Second, it doesn’t aim to do much more than derive “what we already know” and so does not appear to add much to our basic knowledge, except, perhaps, a long labored (formally involved) deduction of a long recognized fact.
Speaking personally, my own work takes GB as a roughly correct description of FL/UG. Many who work on refining UGish generalizations will consider this tendentious. So be it. Let it be stipulated that at any time in any inquiry things are more complicated than they are taken to be. It is also always possible that we (viz. GB) got things entirely wrong. The question is not whether this is an option. Of course it is. The question is how seriously we should take this truism.
So, MP starts from the assumption that we have a fairly accurate picture of some of the central features of FL and considers it fruitful to inquire as to why we have found these features. In other words, MP assumes that time is ripe to ask more fundamental questions because we have reasonable answers to less fundamental questions. If you don’t believe this then MP inquiry is not wrong but footling.
Many who are disappointed in MP don’t actually ask if MP has failed on its own terms, given its own assumptions. Rather, they challenge the assumptions. They take MP to be not so much false as premature. They take issue with the idea that we know enough about FL/UG to even ask the MP question. I believe that these objections are misplaced. In other words, I will assume that GBish descriptions of FL/UG are adequate enough (i.e. are right enough) to start asking the MP question. If you don’t buy this, MP will not be to your taste and you might be tempted to judge its success in terms of your interests rather than its own questions.
 There are few more misleading terms in the field than “grammaticality judgment.” The “raw” data are better termed “acceptability” judgments. Native speakers can reliably rank linguistic objects with regard to relative acceptability (sometimes under an interpretation). These acceptability judgments are, in turn, partial reflections of grammatical competence. This is the official LING view. LANGers need not be as fussy, though they too must distinguish data reflecting judgments in reflective equilibrium from more haphazard reactions. The reason that LANGers differ from LINGers in this regard reflects their different views on what they are studying. I leave it to the reader to run the logic for him/herself.
 The term set should not be taken too seriously. There is little reason to think that languages are sets with clear in/out conditions or that the objects that GLs generate are usefully thought of as demarcating the boundaries of a language. In fact, LINGers don’t assume that the notion of a language is clear or well conceived. What LINGers do assume is that native speakers have a sense of what kinds of objects their native capacities extend to, that this is an open-ended (effectively infinite) capacity, and that it is (indirectly) manifest in their linguistic behavior (the production and understanding of linguistic objects).
 Here’s Chomsky’s description of this fact in his (1964:7):
…a mature native speaker can produce a new sentence of his language on the appropriate occasion, and other speakers can understand it immediately, though it is equally new to them. Most of our linguistic experience, both as speakers and hearers, is with new sentences; once we have mastered a language, the class of sentences with which we can operate fluently is so vast that for all practical purposes (and, obviously, for all theoretical purposes), we may regard it as infinite.
 Personally, I am a big fan of GB and what it has wrought. But MP style investigations need not take GB as the starting point for minimalist investigations. Any conception of FL/UG will do (e.g. HPSG, RG, LFG etc.). In my opinion, the purported differences among these “frameworks” (something that this edited collection highlights) have been overhyped. To my eye, they say more or less the same things, identify more or less the same limiting conditions and do so in more or less the same ways. In other words, these differing frameworks are largely notational variants of one another, a point that Stabler (2010) makes as well.
It's interesting that you use the thermodynamics metaphor to describe the relation between MP and GB. Remko Scha, one of the pioneers of probabilistic grammars, used it often to describe the relation between the level of description of the theoretical linguist and the messy, stochastic cognitive and neural processes underlying it. Things like temperature and pressure are very real (and different from each other) at the macroscopic level, but disappear when you descend to the microscopic level (where there's just movement of molecules). Similarly, categories, rules, grammaticality, are very real at the linguistic level, but disappear when you zoom in further.
It is in this sense only that I can understand an expression like "ontologically more basic": moving molecules are more basic than temperature. I therefore don't really see how we can understand the difference in world views between LINGers and LANGers in degrees of 'ontological basicness' of various linguistic concepts. I'd think it wouldn't be difficult to agree on the ontological status of the set of Greenbergian vs the set of Chomskyan universals -- the disagreement would be about whether or not these sets are empty and what they contain. And, importantly, about how we find out, but that would be a discussion about what is *epistemologically* more basic.
Less a metaphor and more an analogy. But like all analogies it has to be handled carefully. Where I think it fits is that thermodynamics is a phenomenological theory. It captures generalizations that a more fundamental theory aims to explain. This is the way I think of the relation between GB and MP: the former sets the generalizations that the latter should derive in some principled manner. So, I don't really see the analogy the way Scha and you do. I think there are rules and (most likely) categories and, the Grammar being a real object, some conception of grammaticality makes sense. On the other hand, notions like binding domain or c-command or controller are, at best, descriptive terms, not basic terms of art.
By ontologically more basic I mean that LINGers take Gs as more fundamental. Sentences have the features they have because they are objects generated by Gs with certain properties. G properties are more invariant, less context sensitive and etiologically more fundamental. Sentences are complex objects whose properties are the result of the interaction of many different sub-systems, only one of which is the grammar. The problem, IMO, with Greenberg generalizations is that they are summaries of the surface forms that languages have and are likely to be tracking non-natural properties. Greenberg universals are summaries of what we have seen. Chomsky universals are principles that determine what properties a G can have, whether it is seen or not.
I don't really think that the notion of epistemologically more basic makes much sense here. We use data (among other things) to divine the abstract properties of the underlying principles. What you see before your eyes is, perhaps, more epistemologically accessible. But if history is any guide, this kind of data is most often misleading. I think the same holds in linguistics. What you can "see" is likely to be misleading. It's the only way to start, but as things progress we begin manufacturing the data so that it is refined enough to address the fundamental theoretical questions. In sum, I follow Fodor here in warning against confusing ontological questions with epistemological ones. The latter are, at least as concerns the basics, not enlightening.
I think it's more interesting than "Thermodynamics is a phenomenological theory, statistical mechanics is an explanatory theory". At some level, every theory is phenomenological. Underlying a theory that postulates molecules with positions in time, there is a more fundamental theory that uses atoms, quarks, waves, or what not, where the basic particles have no definite position anymore. So, the most interesting thing about the analogy is that it makes us aware that the basic building blocks we need to assume at one level of description may not be clearly delineated objects at a lower level.
I think this is a point where I have often disagreed with your analysis: when you, e.g., call on neuroscientists to look for the stack when trying to understand how the brain processes grammar, you underestimate, I think, how difficult it might be to recognize whatever it is at the neural level that might look more or less like a stack memory at a linguistic level of description.
On the other points, I think we largely agree. Yes grammars (whatever they are) are more basic than sentences. I just wondered whether this is really the point where the worldviews of LINGers and LANGers diverge.
I also don't disagree with what you write about epistemology (although, as a rule of thumb, I prefer going in the reverse direction from Fodor :)). The reason I brought up epistemology is that I see disagreements about what counts as evidence, and what you do when the evidence is inconclusive, as the issues that define the major camps in linguistics. At some point, in every serious discussion, all sides agree that there still are great mysteries about how language is used, learned, and how it evolved... and then they fall back on some basic assumptions that they hold to be self-evidently true.
Well, every theory but the last will not be fundamental. But I think that some theories do not pretend to be fundamental, and thermodynamics was one of these, I think. It established relations among real magnitudes without trying to explain why/how they held. But, I am no expert in these matters, so you may be right.
I also agree that one interesting thing about the analogy is that magnitudes etc that the less fundamental level takes as given will be less clear cut at the more fundamental level. The way reduction/explanation works is usually by restricting the ontology and unifying what looks disparate. Lumping is the name of the game. Hence it is likely that the more fundamental levels will cut things up differently in order to unify them. I think that this will be so in linguistics as well, at least if my conception of minimalism turns out to be roughly on the right track.
I also agree that finding stacks will not be easy. But it might not be impossible either if you are looking for one. Geneticists found the code that DNA embodies by looking for it. It was hard, but it is being done. I think that one of the more interesting criticisms of current neuroscience by Gallistel is that if you are looking for things like addresses, read/write memory and variables, then you should NOT look at connections because you won't/can't find them there. This is an interesting argument. So if we have evidence for these kinds of things (and we do) then this implies that physical models that cannot accommodate them are likely wrong. Ok, so they are. The problem is not so much that these mechanisms are hard to find (though they probably are) but that they are impossible to find if you don't look for them.
Last point: of course you fall back on things you take to be self-evidently true, or at least true enough to hold fixed. What else can you do? What I think is odd is that linguists do not really believe (or many don't) that we have made progress over the last 60 years. I find this incredible. Why do I say this? Because many are unwilling to take anything for granted. And for me this is terrible, because I think that doing Minimalism REQUIRES holding the results of the last 60 years as more or less right. Don't do this and you can do nothing at all. So, the problem is a failure of nerve. We need more guts!
I like your remark in Footnote 4: "Any conception of FL/UG will do (e.g. HPSG, RG, LFG etc.). In my opinion, the purported differences among these “frameworks” (something that this edited collection highlights) have been overhyped. To my eye, they say more or less the same things, identify more or less the same limiting conditions and do so in more or less the same ways. In other words, these differing frameworks are largely notational variants of one another, a point that Stabler (2010) makes as well."
I agree on this, and in "Unifying Everything: Some Remarks on Simpler Syntax, Construction Grammar, Minimalism and HPSG" (Language 89(4), 920–950, https://hpsg.hu-berlin.de/~stefan/Pub/unifying-everything.html) I showed that some of the schemata of HPSG correspond to Internal and External Merge. I checked Stabler's papers but could not find any remark regarding notational variants; his one paper from 2010 contains no such remark.
While I agree that some aspects are directly translatable into other frameworks, there are other aspects that are not translatable at all. Theories formulated within the MP emphasize derivations; other theories emphasize that they are representational. We have known for some time now that transformations are not psycholinguistically real, and usually people say that transformations are metaphors for something. But then you are stating a theory in metaphors that have no connection to reality, and on top of that the metaphors are falsified by everything we know about the processing of natural languages. We do not build complete phrases/phases and ship them off to the interfaces. Language processing is incremental and immediate. Since theories developed within the MP make wrong claims about these "language facts", they cannot be proper theories of "linguistic facts" (language/linguistics in your terminology, see below). So here we have real differences. I am happy with many papers coming from the Minimalist community as long as they do not make any claims about processing and phases. If no such claims are made, the results and analyses can be translated into other frameworks. If you are into "phases" and "shipping" you are alone. In the dark. Sorry.
A further comment on progress and keeping old insights: I absolutely do not get the whole discussion about labeling. People (Donati, Chomsky, Ott, Citko) say they can do free relative clauses that way. But it does not work for cases with pied-piping, and it does not get non-matching free relative clauses right. This is data from 1981–1987, and there have been solutions to the problems in GB. Labeling is also discussed in my paper above.
I've just come across this interesting discussion.
Edward Stabler talked about "notational variants" in his 2011 paper entitled "Meta-meta-linguistics" (Theoretical Linguistics 37-1/2: 69-78), p.5 in the preprint / p.76 in the revised, final version.
Maybe that's the paper Norbert Hornstein had in mind...
You talk about progress and say that the MP is about FL/UG. You claim that UG is probably your version of GB (or HPSG, RG, LFG, etc.). But UG was seen as something that is genetically determined. So what is in UG? You would not want to claim that your whole GB theory is contained in UG? So what is it? Parameters? Name some. Features? Principles? Syntactic categories? If you talk to biologists they will tell you that genes are not as specific as this. There is no gene for bounding nodes in the Subjacency Principle.
You write: "A fundamental aim of linguistic theory is to explain how this is possible by describing the fine structure of the meta-capacity (i.e. by outlining a detailed description of FL’s UG properties)."
But here is a fundamental problem: how do you arrive at knowledge about this meta-capacity? You are speculating on how languages could be and setting up a system on the basis of what you currently know. This system may exclude some languages you did not think of. The alternative view is to start with languages and generalize from there. You will arrive at something that is really general, and probably this will be: languages combine linguistic items (Merge). This may be a bit disappointing, but in the end you learned something on the way. The general methodology I have in mind is described in my paper on the CoreGram project: https://hpsg.hu-berlin.de/~stefan/Pub/coregram.html
You write: "from what I can tell, these facts have not been contested). The first fact is that a native speaker’s capacities cover an unbounded domain of linguistic objects (phrases, sentences etc.)."
It has been contested. You may check my grammar theory textbook, which gives an overview of the discussion of the infinitude claim and pointers to the literature (Pullum, Postal, others).
"GLs consist of generative procedures" This is your assumption. Not shared by everybody.
"In other words, that there is an FL/UG and that it yields GLs is not really open for rational debate". True, but whether it is language specific is.
"Over the last 60 years GG has made considerable progress in discovering the properties of particular GLs and has reasonable outlines of the overall architecture of FL/UG." When I asked the question "what is in UG = the genetically determined linguistic capacity?" I did not get an answer. Chomsky says it is Merge = we combine things, and I am happy with this, although even this may be domain-general. In any case, Merge alone is not enough to make the very complicated Minimalist analyses learnable (covert movement, feature checking in specifier positions, ...). So we have a poverty of the stimulus problem here, which means that either the analyses are wrong (not learnable) or you have to assume a rich UG (impossible/implausible given what we know about biology).
"Thus, MP only makes sense qua program of inquiry if we assume that we know some things about FL/UG. If nothing is known, then the question is premature. In fact, even if something is known, it might be premature."
Yes. Exactly. You have to tell us what UG is. I guess you will tell us in the rest of the article.
You write: "If this is correct, it should be evident why many might dismiss MP inquiry. First, it takes as true what many will think contentious and tries to derive it. Second, it doesn’t aim to do much more than derive “what we already know” and so does not appear to add much to our basic knowledge, except, perhaps, a long labored (formally involved) deduction of a long recognized fact."
As I indicated above: this is a perception one can have. And it is even worse. The labeling analyses fall behind what we already knew. I just do not get the point of this. Categorial Grammar has been able to determine the label of a combination for quite some time now (since Ajdukiewicz 1935). Furthermore, if you look at Chomsky's papers you will see that the concept of labeling was not worked out in detail in the first paper and isn't worked out fully in the second paper either. This is highly frustrating given the formal standards he established in the 1950s.
"If you don’t believe this then MP inquiry is not wrong but footling." The theories couched in the MP framework can be wrong in addition to the program being footling. If they are good theories, they make predictions (Chomsky, 1957), and we can judge them with respect to their predictions.
"There is little reason to think that languages are sets with clear in/out conditions or that objects that GLs generate are usefully thought of as demarcating the boundaries of a language. In fact, LINGers don’t assume that the notion of a language is clear or well conceived." What is a generative grammar then? Isn't a generative grammar something that enumerates a set?
Finally, you claim the term "linguist" for people working within the MP and call all others dealing with language "languists". This may be perceived as arrogant by many. As an editor of a volume that wants people from different frameworks and schools of thought to collaborate, I would not tolerate a paper that insults basically all other contributors to the volume.
I am looking forward to the full paper.
I agree with Stefan about several things, but especially that the term "languist" is counterproductive. It smacks of the pejorative. Linguistics has got to be a big tent. Tribes can coexist in it, and a tribe can't try to grab the venerable mantle of "linguist" all for itself; that's going to be seen as trying to shove everybody else out of the tent.
I will address the substantive comments Stefan makes when I have some time to think about them, but I would like to address the Ling/lang issue now. I don't use LING for people working within MP and LANG for those who do not. The distinction deals not with MP but with the object of inquiry. LANGers take the object of inquiry to be language, LINGers the faculty thereof. I am happy to change terminology if LANGers really want the LING prefix. Fine with me. How about philologists and cognitivists, or language scientists and biolinguists? I don't care about the terminology. I do care about the different subject matters.
Why do I care? Because I believe that part of the evident hostility against MP lies in finally having to face this divide. And no I DO NOT want a big tent. In fact, I have no idea what having a big tent is supposed to mean. This is not politics. This is science and in doing science it helps to know what questions you are addressing and LINGers and LANGers are ultimately addressing different questions, which is what MP finally forces to the surface.
Now we can ask whether one question is inherently better than another. No. It's a matter of interests and tastes. I prefer the cognitive perspective myself, but if you don't, well, that's fine with me. Why should we pretend that we are all doing the same thing when we aren't?
Now note, none of this means that LINGers and LANGers can't talk to one another, nor does it mean that one can't be interested in both issues. But I do think we should be clear about what we are doing. The kumbaya spirit Peter and Stefan are arguing for just muddies the intellectual waters and serves nobody well.
Maybe it is arrogant to want one's questions clearly demarcated and to eschew the big tent view where all distinctions are muddied for collegiality. If so, call me arrogant. I don't want to collaborate with just anyone. I want to collaborate with people interested in the same questions I am interested in. Why this insane insistence that people working on different projects play together? What's the value of that? Scientists sharpen issues, they do not muddy them.
Last point: I have no problem with people not being interested in my questions. Why is it, then, that when I point out that there are different questions, and that their differences should be clarified and not muddied, people find this INSULTING? Are we so insecure that only ignoring differences makes us confident in our research questions?
So, label the difference as you will. The projects are different and recognizing this is the only way to make headway on the one that I find interesting. So, no apologies forthcoming.
A couple of remarks:
1) I don't know what exact passage Norbert is attributing to Ed either. In fact, I don't think Ed would approve of HPSG and LFG being lumped in with some other formalisms because there are significant differences in generative capacity and, more importantly, no good understanding of whether the substantive restrictions posited in those formalisms prune down their power. The most informative weak equivalence result is that between TAG, CCG, and Linear Indexed Grammar because the translation procedures do reveal how the different mechanisms in those formalisms map to each other. But Kuhlmann, Koller & Satta show in a recent paper that this equivalence breaks down for linguistically more faithful versions of CCG. So from a formal perspective this equivalence claim only holds in the rough sense that most formalisms are roughly in the same computational ballpark (with the exception of HPSG and LFG) and that we don't have good empirical data to tease them apart. The parts they have in common seem to cover syntax fairly well, and the parts that are problematic are problematic for everybody.
2) From a purely linguistic perspective, the picture isn't all that different. There's a lot of common ground, but the interesting part is where the formalisms diverge. TAGs, for example, give a straightforward explanation of island effects that is difficult to emulate in other formalisms. Similar ideas can be invoked for Minimalism, see e.g. an old CLS paper of mine. But if you contrast my paper with, say, Bob Frank's discussion of island effects in TAG, you'll see that they run into very different issues, and both differ from how phases handle things. Again there is a common core but lots of divergence around the edges, which is where things actually get interesting.
3) @Norbert: I think you're only considering the scientific side of the tribalism debate here, but in the real world the institutional factors are more important. And I'd say history shows that a big tent is needed if linguistics wants to have any chance to compete for resources with the big fish. Unity is strength, the more fractured the field the more marginalized it will be.
4) @Stefan: While I find a lot to agree with in your post, a few specific points do not jibe with how I see things.
- A derivational approach can still be representational because the set of well-formed derivations can be specified in a constraint-based, representational manner. Now maybe you would not consider that derivational anymore, but I think it shows that the whole representations vs. derivations dichotomy is a red herring.
- I'm not sure what you mean by the claim that transformations are psycholinguistically implausible. There are top-down parsers for Minimalist grammars, and they actually do very well in predicting processing behavior. See this paper by Kobele, Hale, Gerth, Gerth's thesis, and this paper I wrote with some of my students. We also have results on the processing of stacked relative clauses, attachment ambiguities, and scope preferences.
- Maybe answering my previous question, is your beef with the bottom-up structure building enforced by phases? If so, then I think it's important to distinguish between a formalism and the interpretation that is attached to it. A phase is just a locality domain. The whole spell-out story that's woven around it is one particular interpretation, but it's not part of the formalism. Just like in physics you can have many different interpretations of the same mathematically worked-out theory.
- The labeling debate is about deliberately handicapping the formalism (supposedly motivated by conceptual simplicity considerations) to derive the necessity for movement and explain some other things. So saying that labeling is a technically simple problem that has been solved for almost a century kind of misses the point. But yeah, I'm not a fan either.
On 1) I think that the computational power of a linguistic formalism is irrelevant for our discussions. It is the theory formulated within the formalism that has to be restrictive. I discuss this in my Grammar Theory book: http://langsci-press.org/catalog/book/25
On 2) "Again there is a common core but lots of divergence around the edges, which is where things actually get interesting." I fully agree on this!
On 3) I have heard stories about grant applications (for Collaborative Research Centers) where people from other sciences were really surprised about how we linguists deal with each other in (public) reviewing. In the end nobody gets funded and the other sciences get the money.
"4) A derivational approach can still be representational because the set of well-formed derivations can be specified in a constraint-based, representational manner. Now maybe you would not consider that derivational anymore, but I think it shows that the whole representations VS derivations dichotomy is a red herring."
This is true to some extent. I know of the representational versions of GB and the constraint-based formalizations. In fact, lots of GB stuff made its way into HPSG, which is declarative and constraint-based: head movement for V1 and V2 in German, Haider's passive analysis, case theory. It is all there.
There is a crucial difference between a transformational approach to verb movement and the HPSG version of it: There is no separate Deep Structure. This is important when it comes to pairing the theory with performance models.
"I'm not sure what you mean by the claim that transformations are psycholinguistically implausible. There are top-down parsers for Minimalist grammars, and they actually do very well in predicting processing behavior. See this paper by Kobele, Hale, Gerth, Gerth's thesis, and this paper I wrote with some of my students. We also have results on the processing of stacked relative clauses, attachment ambiguities, and scope preferences."
Minimalist Grammars are nice, but they are not what Chomsky suggests. You are not building phases and shipping them off somewhere. What Minimalist Grammars do is similar to the syntactic parts of Categorial Grammar and HPSG, at least in the versions I looked at. There is a discussion of this in the Unifying Everything paper: https://hpsg.hu-berlin.de/~stefan/Pub/unifying-everything.html
Note that it is not sufficient to have a top-down parser. Language processing is neither exclusively top-down nor exclusively bottom-up. It is incremental, and sometimes even middle-out (if the signal was missing or unclear). See Pullum's work on Model-Theoretic Syntax, where he discusses the possibility of assigning structure to utterance fragments. There is a discussion of this in my Grammar Theory textbook: http://langsci-press.org/catalog/book/25
So if you parse:
I think that he is a liar.
You start building structure after hearing "I", you integrate "think", and when you hear "that" you interpret it. If your architecture says you build complete phases (NP, CP, VP, vP or whatever the respective author chooses) and send them off to the interfaces, you make wrong predictions. The "interfaces" are active/available all the time. Even within words you have hypotheses about what the word will be. Eye-tracking studies show this.
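To make the incrementality point concrete, here is a toy Earley recognizer that checks, word by word, whether the prefix heard so far is still a viable sentence prefix. The grammar and lexicon are my own illustrative assumptions, not anyone's actual analysis:

```python
# Toy Earley recognizer: after each word we check that the prefix so far
# is still viable, modeling word-by-word (not phase-by-phase) processing.
# Grammar and lexicon are illustrative assumptions only.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "VP": [["V", "CP"], ["V", "NP"]],
    "CP": [["C", "S"]],
    "NP": [["Det", "N"], ["Pro"]],
}
LEXICON = {"I": "Pro", "think": "V", "that": "C", "he": "Pro",
           "is": "V", "a": "Det", "liar": "N"}

def earley_prefixes(words):
    """For each prefix of words, report whether it is a viable S prefix."""
    n = len(words)
    chart = [set() for _ in range(n + 1)]
    chart[0].add(("S'", ("S",), 0, 0))      # item: (lhs, rhs, dot, origin)
    viable = []
    for i in range(n + 1):
        agenda = list(chart[i])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs) and rhs[dot] in GRAMMAR:          # predict
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], tuple(prod), 0, i)
                    if item not in chart[i]:
                        chart[i].add(item); agenda.append(item)
            elif dot == len(rhs):                               # complete
                for l2, r2, d2, o2 in list(chart[origin]):
                    if d2 < len(r2) and r2[d2] == lhs:
                        item = (l2, r2, d2 + 1, o2)
                        if item not in chart[i]:
                            chart[i].add(item); agenda.append(item)
        if i < n:                                               # scan
            cat = LEXICON.get(words[i])
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == cat:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            viable.append(bool(chart[i + 1]))
    return viable

print(earley_prefixes("I think that he is a liar".split()))
# -> [True, True, True, True, True, True, True]
```

Every prefix comes out viable: structure is available at each word, with no waiting for a completed phase.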
People have tried to reconcile Minimalist theories with psycholinguistics (e.g. Phillips, 2003), but these approaches violate some of Chomsky's constraints (the No Tampering Condition), so they are incompatible with the competence theory.
"Maybe answering my previous question, is your beef with the bottom-up structure building enforced by phases? If so, then I think it's important to distinguish between a formalism and the interpretation that is attached to it. A phase is just a locality domain. The whole spell-out story that's woven around it is one particular interpretation, but it's not part of the formalism. Just like in physics you can have many different interpretations of the same mathematically worked-out theory."
Well, this does not work as a claim. I guess the story came first and you provided a formalization of it. Your formalization ignores parts of the story (the ones that I would ignore too). But this amounts to saying: the story is not important. I cannot believe this. It is a very central point in much of the recent literature. There are books about the relations to the interfaces, handbook articles about Minimalism explain the new architecture and how it differs from GB, and so on.
You could claim that Minimalist theories are just ways to characterize our linguistic knowledge and that we use this knowledge in a different, compiled form for parsing and production. Frazier & Clifton (1996: 27) suggested something like this. But why should one formulate one's theories that way? What is the evidence that a representation is the correct one if it has to be transferred into another representation before it can be combined with performance models? Shouldn't we write our theories down in a way that can be directly paired with performance models? In the former setting you have a primary and a derived theory; in the latter you have just one, and results from research on performance directly feed back into your theory.
"The labeling debate is about deliberately handicapping the formalism (supposedly motivated by conceptual simplicity considerations) to derive the necessity for movement and explain some other things. So saying that labeling is a technically simple problem that has been solved for almost a century kind of misses the point. But yeah, I'm not a fan either."
Well you are a formally trained guy. You know that the passage about coordination and necessity for movement does not work. The whole stuff is complicated as hell and not worked out in detail. So I do not buy the "conceptual simplicity" argument. Compare it with: The functor determines the label. Done. I teach this (Categorial Grammar) to German teachers. They love it.
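To make the contrast concrete, here is a toy rendering of the Categorial Grammar idea that the functor determines the label. The slash notation is standard CG; the particular categories are my own illustrative assumptions:

```python
# "The functor determines the label": under forward application a functor
# of category X/Y takes a Y, and the result is labeled X. Toy sketch only.

def apply_forward(functor, argument):
    """Forward application: X/Y combined with a following Y yields X."""
    result, slash, arg = functor.partition("/")
    if slash and arg == argument:
        return result
    raise ValueError(f"{functor} cannot take {argument}")

print(apply_forward("S/NP", "NP"))   # -> S   (the functor's result type labels the clause)
print(apply_forward("NP/N", "N"))    # -> NP  (the determiner labels the phrase)
```

That is the whole labeling mechanism: one line, fixed by the functor's category.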
And in the end: where is the knowledge about all this labeling supposed to come from? How is it acquired? This is a nice self-constructed poverty of the stimulus problem. Again you could say: this is just a characterization of our knowledge. But why characterize it this way?
While I agree with you for the most part, there are two claims you make in your response to Thomas' (4) which I would like to address:
1. Minimalist grammars and incremental parsing and interpretation
2. Minimalist grammars are not minimalism
I'll begin by addressing point 1. Minimalist grammars have a directly compositional semantics, which is (by definition?) variable-free, and requires no ad hoc basic types (i.e. e and t are fine, without any coding tricks). As Thomas noted, they also have incremental, correct, and efficient parsing algorithms which, given the compositional semantics, can directly parse strings into meanings on a word-by-word basis. Parsing can also work 'from the middle out', as the parsing algorithm is based on the Bar-Hillel et al. construction of intersection with a regular string language; starting in the 'middle' is just intersecting with what you hear surrounded by \Sigma* on either side.
As for point 2, this is harder to argue in either direction. I think there are two points at issue here: one is that this has something of the flavor of a 'no true Scotsman' argument, and the other is that MGs really are a good formalization of the entire minimalist program; they just often make different notational decisions, and abstract away from implementational details when these do not seem to matter. Most importantly, in formalizing and understanding the properties of the formalism, we have come to the realization that the derivation tree, not any derived structure, is the structure we should be talking about. This makes theory-internal issues like labeling, no tampering, extension, move as merge, etc. sort of moot, which is why we have not 'formalized' them; they either follow trivially, are meaningless, or are about something completely different from what they seem. For example, phases (qua spell-out domains) don't appear in the MG formalization because, in the process of actually formalizing things, we discovered that they are unnecessary; phases are an attempt to solve an interface problem which does not exist. (This is a coarse description that I would back off from somewhat in a less constrained format.)
Agreement (and rich theories of the syntax-morphology interface) have typically been ignored by MGers because we have proven results that entail that pretty much any such theory can be grafted onto MGs without changing their fundamental properties. This is just a case of ignoring implementational details because they don't matter for the questions we are interested in asking. A student of mine, Marina Ermolaeva, has been pushing on this, and will be presenting some of her work in this regard at Formal Grammar.
I believe that Norbert's linguists often do have interesting insights; however, they often frame these insights with respect to theoretical issues that are not the most conducive to communicating them.
Nice to talk to you!
"phases are an attempt to solve an interface problem which does not exist."
This is interesting. If it does not exist and if it disappears in your formalization, why isn't everybody using Minimalist Grammars to solve problems? Another question I have is: why do people argue about what actually counts as a phase? This seems to make predictions.
By the way: I love MG. It was Stabler's paper that finally helped me understand Chomsky's paper since it was a formalization and very clearly presented.
"Agreement (and rich theories of the syntax-morphology interface) have typically been ignored by MGers because we have proven results that entail that pretty much any such theory can be grafted onto MGs without changing their fundamental properties. This is just a case of ignoring implementational details because they don't matter for the questions we are interested in asking."
But the claim is that probes and goals are real, we are talking about the brain. Now, you are claiming, they are not needed since you have proven you can do it differently.
I think it is not acceptable to claim that Minimalist ideas can be compiled down into a Categorial Grammar or a Construction Grammar-like set of templates, since the Minimalist proposals come with clear conceptions about when things happen. If you change this, you have a different theory with different predictions. I guess your view is closer to mine than to core Minimalism.
"I believe that Norbert's linguists often do have interesting insights; however, they often frame these insights with respect to theoretical issues that are not the most conducive to communicating them."
@Stefan: Like my first reply, these comments are tangential to the main thrust of your initial post.
1) It is the theory formulated within the formalism that has to be restrictive.
Yes, that's the standard HPSG/LFG reply, together with the claim that you can always reaxiomatize an unrestricted system into a more restricted one once you have a better idea of the target class. Both are correct in principle, but not in practice. The problem with the notion of the restrictiveness of a theory is that it's much harder to tell whether a theory is restrictive than whether a formalism is. Here's a concrete example: every formalism that has a subcategorization mechanism that requires exact matching ("I select a DP, and only a DP") can incorporate arbitrary MSO-definable constraints by prebaking them into the category system. This can open up giant loopholes in theories of locality, render island constraints moot, destroy morphosyntactic restrictions on agreement, and so on. With MGs and CCGs, I know at least that I still won't be able to go beyond the patterns that the formalism can produce, so some generalizations remain. Whereas with a completely unrestricted formalism it is unclear how much damage these loopholes are doing because your theory-specific assumptions might or might not soften the blow. A restrictive formalism is not the final goal, but it is an important first safeguard and safe starting point.
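A minimal sketch of the prebaking trick, under toy assumptions (the grammar, the marked word, and the +/- notation are all illustrative): every nonterminal is split by a flag recording whether its subtree contains a marked word, and exact-match selection then enforces an arbitrarily non-local ban.

```python
from itertools import product

# "Prebaking" a constraint into the category system: refine each
# nonterminal C into C+ / C- (subtree does / does not contain a marked
# terminal). A rule selecting "S-" then bans the marked word at any depth.
# Grammar and notation are illustrative assumptions only.

def bake_in(grammar, marked):
    """Refine nonterminals with a containment flag; selection stays exact-match."""
    refined = {}
    for lhs, rules in grammar.items():
        for rhs in rules:
            for flags in product("+-", repeat=len(rhs)):
                new_rhs, any_marked = [], False
                for sym, f in zip(rhs, flags):
                    if sym in grammar:                     # nonterminal: refine
                        new_rhs.append(sym + f)
                        any_marked |= (f == "+")
                    elif (sym in marked) != (f == "+"):    # terminal: flag must
                        break                              # match membership
                    else:
                        new_rhs.append(sym)
                        any_marked |= (sym in marked)
                else:
                    refined.setdefault(lhs + ("+" if any_marked else "-"),
                                       []).append(new_rhs)
    return refined

G = {"S": [["NP", "VP"]],
     "VP": [["thinks", "CP"], ["left"]],
     "CP": [["that", "S"]],
     "NP": [["who"], ["he"]]}

refined = bake_in(G, {"who"})
print(refined["CP-"])   # -> [['that', 'S-']]
```

A head that selects "CP-" exactly now excludes "who" arbitrarily deep inside the clause, without any locality theory noticing.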
4) When I was talking about representational specifications of derivational theories, I wasn't talking about GB but about Minimalism. My favored specification of MGs treats them as sets of well-formed derivation trees, which can be specified with constraints, tree automata, or whatever else you want.
There is a crucial difference between a transformational approach to verb movement and the HPSG version of it: There is no separate Deep Structure. This is important when it comes to pairing the theory with performance models.
As Greg has pointed out, this isn't true. The representational view of MGs allows you to view them as CFGs with a slightly different yield function, and MG parsing becomes CFG parsing where the parser takes this altered yield function into account. That derivation trees are Deep Structures is immaterial.
Minimalist Grammars are nice, but they are not what Chomsky suggests.
Yes, but most of Minimalist syntax isn't what Chomsky suggests. For instance, hardly anybody bought the Beyond Explanatory Adequacy idea that phases have no internal derivational ordering and everything happens at the same time. If you look at the field at large, the specific notion of phases as chunked spell-out inspires a certain style of analysis, but it is not a necessary component of the proposals that come out of that. Phases as chunked spell-out is like the probability wave interpretation of quantum mechanics: it privileges certain intuitions that give rise to new proposals, but it isn't a fundamental component of the theory as far as empirical matters are concerned.
Take the case of morphosyntax, which imho has been the most active research area in Minimalism for almost a decade now. First of all, this subfield has a very different picture of how syntax works compared to Chomsky's (see e.g. Omer's posts on this blog). And my impression from talking to some of the researchers in this area (a small sample, admittedly) is that in the end it doesn't matter too much to them whether phases are this thing that gets shipped to interfaces incrementally, or how exactly the Agree operation is implemented in syntax. If there's another interpretation that still gets them what they need for their analyses, everything's nice and dandy. Their primary goal is to characterize the system of morphosyntactic agreement rather than the nature of phases or the Agree operation --- that's just a means to an end. So an argument against a specific view of phases, or any other formal construct, is not an argument against Minimalism.
See Pullum's work on Model-Theoretic Syntax, where he discusses the possibility of assigning structure to utterance fragments.
As somebody who has done a lot of work in MTS, I find Pullum's work in that area very unconvincing. The linguistic advantages he attributes to MTS, from graded acceptability to error-correcting parsing, can be easily ported over to the standard transformational approaches.
You could claim that the Minimalist theories are just ways to characterize our linguistic knowledge and that we use knowledge in a different, compiled form for parsing and production.
Yes, and that's pretty much inevitable and shouldn't be all that controversial. A parser pays attention to many factors that do not matter for the idealized competence view, e.g. lexical frequency. Some representations are more useful than others for this purpose. Instead of a left-corner parser for CFGs, it is sometimes nicer to use an equivalent recursive descent parser of the left-corner transform of the same grammar. We do not even know that humans build individual tree representations rather than a forest of the n best analyses, yet I would not want syntactic theory to posit forests instead of trees as the structural primitive because it makes things more complicated with no pay-off for syntax.
Shouldn't we write our theories down in a way that can be directly paired with performance models?
First of all, "directly paired" is a very vague term here since what I consider direct may be obtuse to somebody else. But in general, the answer is no. We should factorize our theories to get the most succinct descriptions. As long as I can mechanically translate between representations A, B, C, I'm gonna use the representation that is the best fit for the task at hand, rather than trying to lump A, B, C together into some much more complicated D. But putting aside general methodology, I'm not even sure how your remarks apply to the work I linked to.
This is interesting. If it does not exist and if it disappears in your formalization why isn't everybody using Minimalist Grammars to solve problems?
Many reasons, but one is exactly why there is a labeling debate: the system is assumed to be limited in some respect because that is taken to explain some piece of data. Most people don't buy the conceptual arguments for phases (the problems were immediately pointed out after the Derivation by Phase manuscript landed in '99), but phases as locality domains were perceived as fixing some serious empirical problems with early Minimalism, and that's why they stuck around.
Another question I have is: Why do people argue about what actually counts as a phase? This seems to make predictions.
Greg is talking about phases as an architectural feature, i.e. the chunked spell-out thing and the conceptual reasons that have been given for such a system, e.g. memory load. The question of which phrases are phases is a purely empirical one and could just as well be reformulated as which phrases are locality domains, which phrases are Barriers, which maximal projections may interact with movement paths, and so on.
But the claim is that probes and goals are real, we are talking about the brain. Now, you are claiming, they are not needed since you have proven you can do it differently.
Neither statement contradicts the other. If you analyze, say, a CKY parser, you will notice that it incorporates the algebraic structure of a semiring. Now, is that semiring real? Yes and no: there is something in the algorithm that can be analyzed as a semiring, and that is a useful idea because it allows you to unify many different parsers (probabilistic, Viterbi, and so on). But the algorithm wasn't specified as a semiring, so in some sense there is no semiring. Similarly, probes and goals can be real in that they describe a cognitively real system of agreement dependencies, but I can take that system and encode it in a very different way, e.g. with MSO constraints, tree automata, subcategorization, and so on. That doesn't make probes and goals any more or less cognitively real.
I'll be the first to admit that this is not how Minimalists talk about it. The program has a very strong ontological commitment that imho goes beyond cognitive realism into the domain of notational literalism. But in practice Minimalists do at some level recognize the split between an abstract dependency and the encoding of this dependency, that's why there is a feeling of continuity between GB and Minimalism despite the superficially large differences.
Yes, calling your "linguists" "biolinguists" would be more appropriate since this is an established label for a certain type of research. The term "linguist" should be used in its traditional way. I think Biolinguistics is a misnomer but at least this label was not used before.
As for sharp distinctions: If you want to play alone, you can do so, but this will be very difficult. If people do not understand what linguists are doing and do not see any use in their research, they will close departments and cut off funding. They will give the chairs to media studies and literature rather than to linguistics (as a reviewer of a commentary by Richter and Sternefeld on my Grammar Theory textbook noted in 2012).
"Why do I care? Because I believe that part of the evident hostility against MP lies in finally having to face this divide. And no I DO NOT want a big tent. In fact, I have no idea what having a big tent is supposed to mean. This is not politics. This is science and in doing science it helps to know what questions you are addressing and LINGers and LANGers are ultimately addressing different questions, which is what MP finally forces to the surface."
If we are talking about money and positions, it is politics. Universities and research funders want people who cooperate. If you ignore half (or more) of your colleagues, you basically cut yourself off from valuable resources. Maybe they are working on very interesting languages/phenomena that could shed light on your issues. Trying to understand analyses in other frameworks can help you invent something similar in your framework. Understanding why negative examples are relevant to justifying their theories may be helpful in defending yours, if your theory got the negative data right in the first place.
Apart from this, I think that many have goals similar to yours: understand how language works, what is special about it, what we need in terms of cognitive abilities, how it is acquired and processed. So, many of us are into cognitive things. Where we differ is how we approach things. One way is to look at several languages and try to work out generalizations. Another way is to assume certain things and see whether they work. In order to work the MP way, you have to speculate about the properties of (all [possible]) languages. But what is this speculation built on? It is built on things we already know: our grammar tradition coming from Latin, plus some additional facts we have gathered along the way since 1957. So what you have to do is get a broader basis, and here you need the "languists", or people who work within the Minimalist paradigm but develop large-scale grammars and compare them with grammars of other languages. This is what was done by some researchers in GB, and what is still done by some "languists" working within the MP.
"Maybe it is arrogant to want one's questions clearly demarcated and to eschew the big tent view where all distinctions are muddied for collegiality. If so, call me arrogant. I don't want to collaborate with just anyone. I want to collaborate with people interested in the same questions I am interested in. Why this insane insistence that people working on different projects play together? What's the value of that? Scientists sharpen issues, they do not muddy them."
My comment was about the "linguists" vs. "languists" distinction. You wanted to have the whole tent for yourself. What about this: Let's make it a big tent. One with compartments. You sit in yours, I sit in mine, Peter sits in his compartment, or maybe shares one with you. Or with me. From time to time we come out and go swimming, sing and dance at the campfire, and have a large Apfelsaftschorle (apple-juice spritzer) together. And in the evening we talk about politics or linguistics. (I remember having a large Apfelsaftschorle with Peter in Norway, no campfire though.)
"Last point: I have no problem with people not being interested in my questions. Why then when I point out that there are different questions, that their differences should be clarified and not muddied do people find this to be INSULTING? Are we so insecure that only ignoring differences makes us confident in our research questions?"
I have no problem with you doing different things; I just pointed out how your ling/lang distinction will be perceived. As for being insecure: I have been working in a minority framework for 24 years now. If I were insecure, I would have done different things by now. And I am not ignoring differences. You may read up on this in the Language paper, where I point out problems with Minimalist theories (and with Construction Grammar approaches). See also the CoreGram paper, which discusses lack of formalization, problems with acquisition, and so on. So there are differences, and some of them are huge and unbridgeable (the architecture question; I will comment on this in another comment to Thomas). The Grammar Theory textbook has a chapter on Minimalism, including a critical discussion, as well. But it may still be worthwhile to look at other people's work without claiming right away that it is absolutely different.
Actually, you pointed out how the politics and funding might play out if we don't all stick together. The big tent is not for intellectual matters but for political ones. You are asking for a coalition of the willing to beat back the forces that might deprive us of funding and opportunity.
This is something I understand. I have no problem coalescing with whomever to protect our slice of the pie. However, I do object to muddying up the waters intellectually and pretending that we are really all doing the same thing. We are not, and not recognizing the differences makes it hard to engage in research and debate. One thing that is very much worth being clear about is what you take the object of inquiry to be. LANGers and LINGers differ here. I have no problem obscuring this when we address funding agencies, but I see little reason to finesse the divide when we are talking science.
Last point: I am clearly in the minority here. You and Thomas and Peter are looking over your shoulders at the funding agencies and university politics. One of the advantages of being old and grey and out the door is that I can put all of this to one side. I wish you all luck.