Tuesday, October 16, 2018

Omer on linguistic atoms

This is my first curation since agreeing to curate. I am soooo excited! The link is to a piece that Omer has on syntactic atoms. I won't be giving much away if I say that he thinks that it is not entirely clear what these are, though whatever they are, they are not what most people take them to be. I won't say what his argument is, because you should read it. But I will say that the main point he makes has been, in part, made by others.

Chomsky has long argued that whatever "words" are, they do not have the referential properties that semanticists take them to have. And in this they contrast with animal calls, which, Chomsky points out, fit the referential/denotational paradigm quite tightly. See (here) for some discussion and references.

Similarly, more recently, Paul Pietroski has argued that syntactic atoms do not denote concepts or extensions but something more abstract; something akin to instructions for fetching concepts. As he points out, the key desideratum for lexical meanings is that they compose (here Paul follows Fodor, who made a very good career pointing out that most of the things that philosophers proposed as vehicles for meaning failed to compose even a little bit). If this is so, then the idea that linguistic atoms are concepts cannot be correct, and the question of how our syntactic atoms emerged to mediate our way to concepts becomes an interesting question. Combine this with Chomsky's observations and one has a real research question to hand.

Omer presents another take on this general view; the idea that our standard conceptions of syntactic atoms are scientifically problematic. In fact, given that we have learned something about syntax and the basic operations it might involve over the last 60 years, just what to make of the atoms (of which we have, IMO, learned little) might be even more urgent.

Here is the link to Omer's piece.

Wednesday, October 10, 2018

Birds, all birds, and nothing but birds

I know, just when you thought it was ok to go back into the water. He’s back!! But rest assured this is a short one and I could not resist. It appears (see here) that biology has forsaken everything that our cognoscenti have taught us about evolution. We all know that it cannot be discontinuous. We all know that the continuity thesis is virtually conceptually necessary. We all know this because for years we have been told that the idea that linguistic facility in humans is based on something biologically distinctive that only humans have is as close to biologically incoherent as can be imagined. Anybody suggesting that what we find in human language might be biologically distinctive and unique is a biological illiterate. Impossible. Period. Creationism!

Well guess again. It seems that the bird voicebox, the syrinx, is biologically sui generis in the animal kingdom and “scientists have concluded that this voice box evolved only once, and that it represents a rare example of a true evolutionary novelty” (1). 

But surely they don’t mean ‘novelty’ when they say ‘novelty.’ Yup, that is exactly what they mean:

“It’s something that comes out of nothing,” says Denis Duboule, a geneticist at the University of Geneva in Switzerland who was not involved with the work. “There is nothing that looks like a syrinx in any related animal groups in vertebrates. This is very bizarre.”

Now, as the little report indicates, true novelties are “hard to come by.” But, as the syrinx indicates, they are not conceptually impossible. It is biologically coherent to propose that these exist and that they can emerge. And that their distinctive properties are exactly what people like Chomsky have been suggesting is true of the recursive parts of FL (4).

They are innovations—new traits or new structures—that arise without any clear connections to existing traits or structures. 

Imagine that, no clear connections to other traits in other species or ancestors. Hmm. Are these guys really biologists? Probably not, or at least not for long, for very soon their credentials are sure to be revoked by the orthodox guardians of EvoLang. Save me! Save me! The discontinuitists are coming!

The report makes one more interesting observation: these kinds of qualitatively new innovations serve as interesting gateways for yet more innovation. Here, the development of the syrinx could have enabled songs to become more complex, and biologists speculate that this might in turn have led to further speciation. In the language case, it is conceivable that the capacity for recursion in language led to a capacity for recursion more generally in other cognitive domains. Think of arithmetic as a new song one can sing when hierarchical recursion has snuck in.

Is all of this correct? Who knows? Today the claim is that the syrinx is a biological novelty. Tomorrow we might find out that it is less novel than currently advertised (recall, for Minimalists, FL is unique but not that unique. Just a teensy weensy bit unique). What is important is not whether it is unique, but the fact that biology and evolution and genetics have nothing against unique, sui generis, one-of-a-kind features. They are rare, but not unheard of and not beyond the intellectual pale. That means that entertaining the possibility that something, say hierarchical recursion, is a unique cognitive capacity is not living out on the intellectual edge in evolutionary La-La land. It is a hypothesis and one that cannot be dismissed by assuming that this is not the way biology works or could work. It can so work and seems even to have done so on occasion. That means critics of the claim that language is a species specific capacity have to engage with the actual claims. Hand waving is simply dishonest (and you know who you are). 

Moreover, we know how to show that uniqueness claims are incorrect: just (ha!) show how to derive the properties of the assumed unique organ/capacity from more generic traits and show how the trait/organ under consideration could have continuously evolved from these using very itty bitty steps. Apparently, this was done for fingers and toes from fish fins. If you think that hierarchical recursion is “just more of the same” then find me the fins and show me the steps. If not, well, let’s just say that the continuists have some work ahead of them (Lucy, you have some explaining to do) if they want to be taken seriously and that there is nothing biologically untoward or incoherent or wrong in assuming that sometimes, rarely but sometimes, novelties arise “without any clear connections to existing traits and structures.” And what better place to look for a discontinuity than in language?

Let me end by adding two useful principles for future thinking on topics related to language and the mind:

1. Chomsky is never (stupidly) wrong

2. If you think that Chomsky is (stupidly) wrong, go back to 1

Friday, September 28, 2018

Pulling back

Today marks FoL's sixth anniversary. I started FoL because I just could not stand reading the junk being written about Generative Grammar (GG) in the popular press. The specific occasion was some horrid coverage of Everett's work (by Bartlett in the Chronicle) on Piraha and its supposed significance for theories of FL/UG. The discussion was based on the most trivial misunderstanding of the GG enterprise, and I thought that I could help sort matters out and have some fun in the process. I did have fun. I have sorted things out. I have not stopped the garbage.

FoL has continued to try to deal with the junk by both pointing it out and then explaining how it was badly mistaken. This has, sadly, kept me quite busy. There is lots of misunderstanding out there and it never seems to lessen, no matter how many stakes get driven into the hearts of the extremely poor arguments.

In addition to regularly cleaning out the Augean stables, I have also written on other issues that amuse me: Rationalism vs Empiricism, Fodor and representationalism, big data, deep learning, mentalism, PoS argumentation, languistics vs linguistics, universals, Evo Lang, minimalism and its many many virtues, minimalism and its obscurities, minimalism and how to do it right, minimalism and how/why people misinterpret it, computationalism and its implications for cog-neuro, the greatness of Randy's work, interesting findings in neuro that support GG, how to bring neuro and ling closer together (Yay to Embick and Poeppel), and more. It's been a busy 6 years.

In fact, here is how busy. I calculate that I've written about 1 long post per week for the last 6 years. A long post is 4 pages or more. I have also put up shorter ones so that overall I have posted upwards of 600 pieces. And I have enjoyed every minute of this.

However, doing this (finding the articles, reading them, writing the posts, responding to comments, cleaning the site) has taken up a lot of my time, and as I want to write one last book before I call it quits (I am getting very old!), I have decided that I have to cut back. My current plan is to write maybe one post a month, if that. This will allow me to write my magnum opus (this is a joke!), which will be an even more full-throated defense of the unbelievable success of the Minimalist Program. It really has been marvelous and I intend to show exactly how marvelous in about 150 (not more or nobody will take a look) fun-filled pages. If all goes well, I might even post versions in FoL for comment.

This is all a long-winded way of saying that I will be posting much less often and to thank you for reading, commenting and arguing with me for the last 6 years. I have learned a lot and enjoyed every minute. But time for a break.

One last point: if anyone wishes to post to FoL I am open to looking at things to put up. We still have others that will be contributing content and I am happy to curate more if it comes my way. So feel free to jump in. And again, thx.

Linguistic experiments

How often do we test our theories and basic concepts in linguistics? I don’t know for sure, but my hunch is that it is not that often. Let me explain.

One of the big ideas in the empirical sciences is the notion of the crucial experiment, or “experimentum crucis” (EC) for those of you who prefer “ceteris paribus” to “all things being equal” (psst, I am one of those, so it is ‘EC’ from now on) (see here). What is an EC? Wikipedia says the following:

In the sciences, an experimentum crucis (English: crucial experiment or critical experiment) is an experiment capable of decisively determining whether or not a particular hypothesis or theory is superior to all other hypotheses or theories whose acceptance is currently widespread in the scientific community. In particular, such an experiment must typically be able to produce a result that rules out all other hypotheses or theories if true, thereby demonstrating that under the conditions of the experiment (i.e., under the same external circumstances and for the same "input variables" within the experiment), those hypotheses and theories are proven false but the experimenter's hypothesis is not ruled out.

The most famous experiments in the sciences (e.g. Michelson-Morley on Special Relativity, Eddington’s on General Relativity, Aspect on Bell’s inequality) are ECs, including those that were likely never conducted (e.g. Galileo’s dropping things from the tower). What makes them critical is that they are able to isolate a central feature of a theory or a basic concept for test in a local environment where it is possible to control for the possible factors. We all know (or we all should know) that it is very hard to test an interesting theoretical claim directly.[1] As the quote above notes, the test critically relies on carefully specifying the “conditions of the experiment” so as to be able to isolate the principle of interest enough for an up or down experimental test.

What happens in such an experiment? Well, we set up ancillary assumptions that are well grounded enough to allow the experiment to focus on the relevant feature up for test. In particular, if the ancillary assumptions are sufficiently well grounded in the experimental situation then the proposition up for test will be the link in the deductive structure of the set up that is most exposed by the test. 

Ancillary assumptions are themselves empirical and hence contestable. That is why ECs are so tough to dream up: to be effective these ancillary assumptions must, in the context of the experimental setup, be stronger than the theoretical item they are being used to test. If they are weaker than the proposition to be tested then the EC cannot decisively test that proposition. Why? Well, the ancillary assumption(s) will be weaker links in the chain of experimental reasoning and an experimental result can always be correctly causally attributed to the weaker ancillary assumptions. This will spare the theoretical principle or concept of interest direct exposure to the test. However, and this is the important thing, it is possible in a given context to marshal enough useful ancillary assumptions that are better grounded in that context than the proposition to be tested. And when this is possible the conditions for an EC are born.

As I noted, I am not sure that we linguists do much ECing. Yes, we argue for and against hypotheses and marshal data to those ends, but it is rare that we set things up to manufacture a stable EC. Here is what I mean.

A large part of linguistic work aims less to test a hypothesis than to apply it (and thereby possibly (note: this is a possibility, not a necessity) to refine it). For example, say I decide to work on a certain construction C in a certain language L. Say C has some focus properties, namely when the expression appears in a designated position distinct from its “base” position it bears a focus interpretation. I then analyze the mechanisms underlying this positioning. I use movement theory to triangulate on the kind of operation that might be involved. I test this assumption by seeing if it meets the strictures of Subjacency Theory (allows unbounded dependencies yet obeys islands) and if it does, I conclude it is movement. I then proceed to describe some of the finer points of the construction given that it is an A’-movement operation. This might force a refinement of the notion of movement, or island, or phase to capture all the data, but the empirical procedure presupposes that the theory we entered the investigation with is on the right track though possibly in need of refinement within the grammar of L. The empirical investigation’s primary interest is in describing C in L and in service of this it will refine/revise/repurpose (some) principles of FL/UG. 

This sort of work, no matter how creative and interesting, is unlikely to lead to an EC of the principles of FL/UG precisely because of its exploratory nature. The principles are more robust than the ancillary assumptions we will make to fit the facts. And if this is so, we cannot use the description to evaluate the basic principles. Quite the contrary. So, this kind of work, which I believe describes a fair chunk of what gets done, will not generally serve EC ends.

There is a second impediment to ECs in linguistics. More often than not the principles are too gauzy to be pinned down for direct test. Take for example the notion of “identity” or “recoverability.” Both are key concepts in the study of ellipsis, but, so far as I can tell, we are not quite sure how to specify them. Or maybe a more accurate claim would be that we have many, many specifications. Is it exact syntactic identity? Or identity as non-distinctness? Or propositional (semantic) identity? Identity of what object at what level? We all know that something like identity is critical, but it has proven to be very hard to specify exactly what notion is relevant. And of course, because of this, it is hard to generate ECs to test these notions. Let me repeat: the hallmark of a good EC is its deductive tightness. In the experimental situation the experimental premises are tight enough and grounded enough to focus attention on the principle/concept of interest. Good ECs are very tight deductive packages. So constructing effective ones is hard and this is why, I believe, there are not many ECs in linguistics.

But this is not always so, IMO. Here are some example ECs that have convinced me.

First: It is pretty clear that we cannot treat case as a byproduct of agreement. What’s the EC?[2] Well, one that I like involves the Anaphor Agreement Effect (AAE). Woolford (refining Rizzi) observed that reflexives cannot sit in positions where they would have to value agreement features on a head. The absence of nominative reflexives in languages like English illustrates this. The problem with them is not that they are nominatively case marked, but that they must value the un-valued phi features of T0, and they cannot do this. So, AAE becomes an excellent phi-feature detector and it can be put to use in an EC: if case is a byproduct of phi-feature valuation then we should never find reflexives in (structurally) case marked positions. This is a direct consequence of the AAE. But we do regularly find reflexives in non-nominative positions, hence it must be possible to assign case without first valuing phi-features. Conclusion: case assignment need not piggyback on phi-feature valuation. 

Note the role that the AAE plays in this argument. It is a relatively simple and robust principle. Moreover, it is one that we would like to preserve as it explains a real puzzling fact about nominative reflexives: they don’t robustly exist! And where we do find them, they don’t come from T0s with apparent phi-features, and where we find other case assigning heads that do have unvalued phi-features we don’t find reflexives. So, all in all, the AAE looks like a fairly decent generalization and is one that we would like to keep. This makes it an excellent part of a deductive package aimed at testing the idea that case is parasitic on agreement, as we can lever its retention into a probe of some idea we want to explore. If AAE is correct (main assumption), then if case is parasitic on agreement we shouldn’t see reflexives in case positions that require valuing phi features on a nearby head. If case is not parasitic on phi valuation then we will. The experimental verdict is that we do find reflexives in the relevant domains and the hypothesis that case and phi-feature valuation are two sides of the same coin sinks. A nice tight deductive package. An EC with a very useful result.

Second: Here’s a more controversial EC, but I still think it is pretty dispositive. Inverse control provides a critical test for PRO based theories of control. Here’s the deductive package: PRO is an anaphoric dependent of its controller. Anaphoric dependents can never c-command their antecedents as this would violate principle C. Principle C is a very robust characteristic of binding configurations. So, a direct consequence of PRO based accounts of control is the absence of inverse control configurations, configurations in which “PRO” c-commands its antecedent. 

This consequence has been repeatedly tested since Polinsky and Potsdam first mooted the possibility in Tsez and it appears that inverse control does indeed exist. But regardless of whether you are moved by the data, the logic is completely ECish and unless there is something wrong with the design (which I strongly doubt) it settles the issue of whether Control is a DP-PRO dependency. It cannot be. Inverse control settles the matter. This has the nice consequence that PRO does not exist. Most linguists resist this conclusion but, IMO, that is because they have not fully taken on board the logic of ECs.

Here’s a third and last example: are island effects complexity effects or structural effects? In other words, are island effects the reflections of some generic problem that islands present cognition with or something specific to the structural properties of islands? The former grants that island effects exist but holds that they are due to, for example, the short term memory overload that the parsing of islands induces. 

The two positions are both coherent and, truth be told, for theoretical reasons, I would rather that the complexity story were the right one. It would just make my life so much easier to be able to say that island effects were not part of my theoretical minimalist remit. I could then ignore them because they are not really reflections of the structure of FL/UG and so I would not have to try and explain them! Boy would that be wonderful! But much as I would love this conclusion, I cannot in good scientific conscience adopt it, for Sprouse and colleagues have done ECs showing that it is very, very likely wrong. I refer you to the Experimental Syntax volume Sprouse and I edited for discussion and details (see here). 

The gist of the argument is that were islands reflexes of things like memory limitations then we should be able to move island acceptability judgments around by manipulating the short term memory variable. And we can do this. Humans come in strong vs weak short term memory capacities. We even have measures of these. Were island effects reflections of such memory capacity, then island effects would differentially affect these two groups. They don’t, so it’s not. Again the EC comes in a tight little deductive box and the experiment (IMO) decisively settles the matter. Island effects, despite my fondest wishes, really do reflect something about the structural linguistic properties of islands. Damn!

So, we have ECs in linguistics and I would like to see many more. Let me end by saying why.  I have three reasons.

First, it would generate empirical work directly aimed at theoretically interesting issues. The current empirical investigative instrument is the analysis, usually of some construction or paradigm. It starts with an empirical paradigm or construction in some L and it aims at a description and explanation for that paradigm’s properties. This is a fine way to proceed and it has served us well. This way of proceeding is particularly apposite when we are theory poor for it relies on the integrity of the paradigm to get itself going and reaches for the theory in service of a better description and possible explanation. And, as I said, there is nothing wrong with this. However, though it confronts theory, it does so obliquely rather than directly. Or so it looks to me.

To see this, contrast it with the kind of empirical work we see more often in the rest of the sciences. Here empirical work is experimental. Experiments are designed to test the core features of the theory. This requires, first, identifying and refining the key features of the leading ideas, massaging them, explicating them and investigating their empirical consequences. Once done, experiments aim to find ways of making these consequences empirically visible. Experiments, in other words, require a lot of logical scaffolding. They are not exploratory but directed towards specific questions, questions generated by the theories they are intended to test. Maybe a slogan would help here: linguistics has lots of exploratory work, some theoretical work, but only a smidgen of experimental work. We could do with some more.

Second, experiments would tighten up the level of argument. I mentioned that ECs come as tight deductive packages. The assumptions, both what is being tested and the ancillary hypotheses, must be specified for an EC to succeed. This is less the case for exploratory work. Here we need to string together principles and facts in a serviceable way to cover the empirical domain. This is different from building an airtight box to contain it and prod it and test it. So, I think that a little more experimental thinking would serve to tighten things up.

Third, the main value of ECs is that they eliminate theoretical possibilities and so allow us to more narrowly focus theory construction. For example, if case is not parasitic on agreement then this suggests different theories of case than ones where they must swing together. Similarly, if PRO does not exist, then theories that rely on PRO are off on the wrong track, no matter how descriptively useful they might be. The role of experiments, in the best of all possible worlds, is to discard attractive but incorrect theory. This is what empirical work is for, to dispose. Now, we do not (and never will) live in the best of all possible scientific worlds. But this does not mean that getting a good bead on the empirical standing of our basic concepts experimentally is not useful. 

Let me finish by adding one more thing. Our friends in psycho ling do experiments all the time. Their culture is organized around this procedure. That’s why I have found going to their lab meetings so interesting. I think that theories in Ling are far better grounded and articulated than theories in psycho-ling (that is my personal opinion) but their approach often seems more direct and reasonable. If you have not been in the habit of sitting in on their lab meetings, I would recommend doing so. There is a lot to recommend the logic of experimentation that is part of their regular empirical practice.


[1] Part of the problem with languists’ talking about Chomsky’s linguistic conception of universals is that they do not appreciate that simply looking at surface forms is unlikely to bear much on the claim being made. Grammars are not directly observable. Languists take this to imply that Chomskyan universals are not testable. But this is not so. They are not trivially testable, which is a whole different matter. Nothing interesting is trivially testable. It requires all sorts of ancillary hypotheses to set the stage for isolating the relevant principle of interest. And this takes lots of work. 
[2] This is based on discussions with Omer. Thx.

Wednesday, September 19, 2018

Generative grammar's Chomsky Problem

Martin Haspelmath (MH) and I inhabit different parts of the (small) linguistics universe. Consequently, we tend to value very different kinds of work and look to answer very different kinds of questions. As a result, when our views converge, I find it interesting to pay attention. In what follows I note a point or two of convergence. Here is the relevant text that I will be discussing (henceforth MHT, for “MH text”).[1]

MHT’s central claim is that “Chomsky no longer argues for a rich UG of the sort that would be relevant for the ordinary grammarian and, e.g. for syntax textbooks” (1). It extends a similar view to me: “even if he is not as radical about a lean UG as Chomsky’s 21st century writings (where nothing apart from recursion is UG), Hornstein’s view is equally incompatible with current practice in generative grammar” (MHT emphasis, (2)).[2]

Given that neither Chomsky nor I seems to be inspiring current grammatical practice (btw, thx for the company MH), MHT notes that “generative grammarians currently seem to lack an ideological superstructure.” MHT seems to suggest that this is a problem (who wants to be superstructure-less after all?), though it is unclear for whom, other than Chomsky and me (what’s a superstructure anyhow?). MHT adds that Chomsky “does not seem to be relevant to linguistics anymore” (2).

MHT ends with a few remarks about Chomsky on alien (as in extra-terrestrial) language, noting a difference between him and Jessica Coon on this topic. Jessica says the following (2):

 When people talk about universal grammar it’s just the genetic endowment that allows humans to acquire language. There are grammatical properties we could imagine that we just don’t ever find in any human language, so we know what’s specific to humans and our endowment for language. There’s no reason to expect aliens would have the same system. In fact, it would be very surprising if they did. But while having a better understanding of human language wouldn’t necessarily help, hopefully it’d give us tools to know how we might at least approach the problem.

This is a pretty vintage late 1980s bioling view of FL. Chomsky demurs, thinking that perhaps “the Martian language might not be so different from human language after all” (3). Why? Because Chomsky proposes that many features of FL might be grounded in generic computational properties rather than idiosyncratic biological ones. In his words:

We can, in short, try to sharpen the question of what constitutes a principled explanation for properties of language, and turn to one of the most fundamental questions of the biology of language: to what extent does language approximate an optimal solution to conditions that it must satisfy to be usable at all, given extralinguistic structural architecture?

MHT finds this opaque (as do I, actually), though the intent is clear: to the degree that the properties of FL and the Gs it gives rise to are grounded in general computational properties, properties that a system would need to have “to be usable at all,” then to that degree there is no reason to think that these properties would be restricted to human language (i.e. there is no reason to think that they would be biologically idiosyncratic). 

MHT’s closing remark about this is to reiterate his main point: “Chomsky’s thinking since at least 2002 is not really compatible with the practice of mainstream generative grammar” (3-4).

I agree with this, especially MHT's remark about current linguistic practice. Much of what interests Chomsky (and me) is not currently high up on the GG research agenda. Indeed, I have argued (here) that much of current GG research has bracketed the central questions that originally animated GG research and that this change in interests is what largely lies behind the disappointment many express with the Minimalist Program (MP). 

More specifically, I think that though MP has been wildly successful in its own terms and that it is the natural research direction building on prior results in GG, its central concerns have been of little mainstream interest. If this assessment is correct, it raises a question: why the mainstream disappointment with MP and why has current GG practice diverged so significantly from Chomsky’s? I believe that the main reason is that MP has sharpened the two contradictory impulses that have been part of the GG research program from its earliest days. Since the beginning there has been a tension between those mainly interested in the philological details of languages and those interested in the mental/cognitive/neuro implications of linguistic competence.

We can get a decent bead on the tension by inspecting two standard answers to a simple question: what does linguistics study? The obvious answer is language. The less obvious answer is the capacity for language (aka, linguistic competence). Both are fine interests (actually, I am not sure that I believe this, but I want to be concessive (sorry Jerry)). And for quite a while it did not much matter to everyday research in GG which interest guided inquiry, as the standard methods for investigating the core properties of the capacity for language proceeded via a filigree philological analysis of the structures of language. So, for example, one investigated the properties of the construal modules by studying the distribution of reflexives and pronouns in various languages. Or by studying the locality restrictions on question formation (again in particular languages) one could surmise properties of the mentalist format of FL rules and operations. Thus, the way that one studied the specific cognitive capacity a speaker of a particular language L had was by studying the details of the language L, and the way that one studied more general (universal) properties characteristic of FL and UG was by comparing and contrasting constructions and their properties across various Ls. In other words, the basic methods were philological even if the aims were cognitive and mentalistic.[3] And because of this, it was perfectly easy for the work pursued by the philologically inclined to be useful to those pursuing the cognitive questions and vice versa. Linguistic theory provided powerful philological tools for the description of languages and this was a powerful selling point. 

This peaceful commensalism ends with MP. Or, to put it more bluntly, MP sharpens the differences between these two pursuits because MP inquiry only makes sense in a mentalistic/cognitive/neuro setting. Let me explain.

Here is a very short history of GG. It starts with two facts: (1) native speakers are linguistically productive and (2) any human can learn any language. (1) implies that natural languages are open ended and thus can only be finitely characterized via recursive rule systems (aka grammars (Gs)). Languages differ in the rules their Gs embody. Given this, the first item on the GG research agenda was to specify the kinds of rules that Gs have and the kinds of dependencies Gs care about. Having an inventory of such rules in hand sets up the next stage of inquiry.
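To make "finitely characterized via recursive rule systems" concrete, here is a minimal sketch in Python. The toy rules are mine and purely illustrative (no linguist has proposed this grammar); the point is only that a finite rule inventory generates an open-ended set of hierarchical structures because one rule re-introduces the very symbol being expanded:

    # Toy recursive rewrite rules (illustrative only, not a real grammar).
    RULES = {
        "S":  [["NP", "VP"]],
        "NP": [["John"]],
        "VP": [["sleeps"], ["thinks", "S"]],  # "thinks S" re-introduces S: recursion
    }

    def generate(symbol, depth=0, max_depth=2):
        """Expand a symbol into a labeled bracketing (a hierarchical structure)."""
        if symbol not in RULES:  # terminal word
            return symbol
        options = RULES[symbol]
        # Take the recursive option until max_depth, then a terminating one.
        expansion = options[-1] if depth < max_depth else options[0]
        parts = [generate(s, depth + 1, max_depth) for s in expansion]
        return "[" + symbol + " " + " ".join(parts) + "]"

    print(generate("S"))
    # [S [NP John] [VP thinks [S [NP John] [VP sleeps]]]]

Raising max_depth yields ever deeper embeddings from the same three rules, which is all that "open ended yet finitely characterized" amounts to.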

The second stage begins with fact (2). Translated into Gish terms it says that any Language Acquisition Device (aka, child) can acquire any G. We called this meta-capacity to acquire Gs “FL” and we called the fine structure of FL “UG.” The fact that any child can acquire any G despite the relative paucity and poverty of the linguistic input data implies that FL has some internal structure. We study this structure by studying the kinds of rules that Gs can and cannot have. Note that this second project makes little sense until we have candidate G rules. Once we have some, we can ask why the rules we find have the properties they do (e.g. structure dependence, locality, c-command). Not surprisingly then, the investigation of FL/UG and the investigation of language particular Gs naturally went hand in hand and the philological methods beloved of typologists and comparative grammarians led the way. And boy did they lead! GB was the culmination of this line of inquiry. GB provided the first outlines of what a plausible FL/UG might look like, one that had grounding in facts about actual Gs. 

Now, this line of research was, IMO, very successful. By the mid 90s, GG had discovered somewhere in the vicinity of 25-35 non-trivial universals (i.e. design features of FL) that were “roughly” correct (see here for a (partial) list). These “laws of grammar” constitute, IMO, a great intellectual achievement. Moreover, they set the stage for MP in much the way that the earlier discovery of rules of Gs set the stage for GB style theories of FL/UG. Here’s what I mean.

Recall that studying the fine structure of FL/UG makes little sense unless we have candidate Gs and a detailed specification of some of their rules. Similarly, if one’s interest is in understanding why our FL has the properties it has, we need some candidate FL properties (UG principles) for study. This is what the laws of grammar provide; candidate principles of FL/UG. Given these we can now ask why we have these kinds of rules/principles and not other conceivable ones. And this is the question that MP sets for itself: why this FL/UG? MP, in short, takes as its explanandum the structure of FL.[4]

Note, if this is indeed the object of study, then MP only makes sense from a cognitive perspective. You won’t ask why FL has the properties it has if you are not interested in FL’s properties in the first place. So, whereas the minimalist program so construed makes sense in a GG setting of the Chomsky variety, where a mental organ like FL and its products are the targets of inquiry, it is less clear that the project makes much sense if one’s interests are largely philological (in fact, it is pretty clear to me that it doesn’t). If this is correct and if it is correct that most linguists have mainly philological interests, then it should be no surprise that most linguists are disappointed with MP inquiry. It does not deliver what they can use, for it is no longer focused on questions analogous to the ones that were prominent before and which had useful spillover effects. The MP focus is on issues decidedly more abstract and removed from immediate linguistic data than heretofore. 

There is a second reason that MP will disappoint the philologically inclined. It promotes a different sort of inquiry. Recall that the goal is explaining the properties of FL/UG (i.e. the laws of grammar are the explananda). But this explanatory project requires presupposing that the laws are more or less correct. In other words, MP takes GB as (more or less) right.[5] MP's added value comes in explaining it, not challenging it. 

In this regard, MP is to GB what Subjacency Theory is to Ross’s islands. The former takes Ross’s islands as more or less descriptively accurate and tries to derive them on the basis of more natural assumptions. It would be dumb to aim at such a derivation if one took Ross’s description to be basically wrong headed. So too here. Aiming to derive the laws of grammar requires believing that these are basically on the right track. However, this means that so far as MP is concerned, the GBish conception of UG, though not fundamental, is largely empirically accurate. And this means that MP is not an empirical competitor to GB. Rather, it is a theoretical competitor in the way that Subjacency Theory is to Ross’s description of islands. Importantly, empirically speaking, MP does not aim to overthrow (or even substantially revise the content of) earlier theory.[6]

Now this is a problem for many working linguists. First, many don’t have the same sanguine view that I do of GB and the laws it embodies. In fact, I think that many (most?) linguists doubt that we know very much about UG or FL or that the laws of grammar are even remotely correct. If this is right, then the whole MP enterprise will seem premature and wrong headed to them.  Second, even if one takes these as decent approximations to the truth, MP will encourage a kind of work that will be very different from earlier inquiry. Let me explain.

The MP project so conceived will involve two subparts. The first one is to derive the GB principles. If successful, this will mean that we end up empirically where we started. If successful, MP will recover the content of GB. Of course, if you think GB is roughly right, then this is a good place to end up. But the progress will be theoretical not empirical. It will demonstrate that it is reasonable to think that FL is simpler than GB presents it as being. However, the linguistic data covered will, at least initially, be very much the same. Again, this is a good thing from a theoretical point of view. But if one’s interests are philological and empirical, then this will not seem particularly impressive as it will largely recapitulate GB's empirical findings, albeit in a novel way.

The second MP project will be to differentiate the structure of FL and to delineate those parts that are cognitively general from those that are linguistically proprietary. As you all know, the MP conceit is that linguistic competence relies on only a small cognitive difference between us and our apish cousins. MP expects FL’s fundamental operations and principles to be cognitively and computationally generic rather than linguistically specific. When Chomsky denies UG, what he denies is that there is a lot of linguistic specificity to FL (again: he does not deny that the GB identified principles of UG are indeed characteristic features of FL). Of course, hoping that this is so and showing that it might be/is are two very different things. The MP research agenda is to make good on this. Chomsky’s specific idea is that Merge and some reasonable computational principles are all that one needs. I am less sanguine that this is all that one needs, but I believe that a case can be made that this gets one pretty far. At any rate, note that most of this work is theoretical and it is not clear that it makes immediate contact with novel linguistic data (except, of course, in the sense that it derives GB principles/laws that are themselves empirically motivated (though recall that these are presupposed rather than investigated)). And this makes for a different kind of inquiry than the one that linguists typically pursue. It worries about finding natural more basic principles and showing how these can be deployed to derive the basic features of FL. So a lot more theoretical deduction and a lot less (at least initially) empirical exploration.
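For the concretely minded, here is a minimal sketch of the standard construal of Merge as binary set formation, Merge(X, Y) = {X, Y}. The Python rendering is mine and purely illustrative, not Chomsky's formalism:

    def merge(x, y):
        """Merge: combine two syntactic objects into an unordered set."""
        return frozenset([x, y])

    # Build "the boy saw the girl" bottom-up:
    dp_subj = merge("the", "boy")    # {the, boy}
    dp_obj  = merge("the", "girl")   # {the, girl}
    vp      = merge("saw", dp_obj)   # {saw, {the, girl}}
    clause  = merge(dp_subj, vp)     # {{the, boy}, {saw, {the, girl}}}

Iterating this one operation already yields unbounded hierarchical embedding with no linear order imposed; everything else (labels, agreement, locality) has to come from the "reasonable computational principles" just mentioned.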

Note, incidentally, that in this context, Chomsky’s speculations about Martians and his disagreement with Coon is a fanciful and playful way of making an interesting point. If FL’s basic properties derive from the fact that it is a well designed computational system (its main properties follow from generic features of computations), then we should expect other well designed computational systems to have similar properties. That is what Chomsky is speculating might be the case. 

So, why is Chomsky (and MP work more generally) out of the mainstream? Because mainstream linguistics is (and has always been IMO) largely uninterested in the mentalist conception of language that has always motivated Chomsky’s view of language. For a long time, the difference in motivations between Chomsky and the rest of the field was of little moment. With MP that has changed. The MP project only makes sense in a mentalist setting and invites decidedly non-philological projects without direct implications for further philological inquiry. This means that the two types of linguistics are parting company. That’s why many have despaired about MP. It fails to have the crossover appeal that prior syntactic theory had. MHT's survey of the lay of the linguistic land accurately reflects this IMO.

Is this a bad thing? Not necessarily, intellectually speaking. After all, there are different projects and there is no reason why we all need to be working on the same things, though I would really love it if the field left some room for the kind of theoretical speculation that MP invites.

However, the divergence might be sociologically costly. Linguistics has gained most of its extramural prestige from being part of the cog-neuro sciences. Interestingly, MP has generated interest in that wider world (and here I am thinking cog-neuro and biology). Linguistics as philology is not tethered to these wider concerns. As a result, linguistics in general will, I believe, become less central to general intellectual life than it was in earlier years when it was at the center of work in the nascent cognitive and cog-neuro sciences. But I could be wrong. At any rate, MHT is right to observe that Chomsky’s influence has waned within linguistics proper. I would go further. The idea that linguistics is and ought to be part of the cog-neuro sciences is, I believe, a minority position within the discipline right now. The patron saint of modern linguistics is not Chomsky, but Greenberg. This is why Chomsky has become a more marginal figure (and why MH sounds so delighted). I suspect that down the road there will be a reshuffling of the professional boundaries of the discipline, with some study of language of the Chomsky variety moving in with cog-neuro and some returning to the language departments. The days of the idea of a larger common linguistic enterprise, I believe, are probably over.


[1] I find that this is sometimes hard to open. Here is the URL to paste in:
https://dlc.hypotheses.org/1269 

[2] I should add that I have a syntax textbook that puts paid to the idea that Chomsky’s basic current ideas cannot be explicated in one. That said, I assume that what MHT intends is that Chomsky’s views are not standard textbook linguistics anymore. I agree with this, as you will see below.
[3] This was and is still the main method of linguistic investigation. FoLers know that I have long argued that PoS style investigations are different in kind from the comparative methods that are the standard and that when applicable they allow for a more direct view of the structure of FL. But as I have made this point before, I will avoid making it here. For current purposes, it suffices to observe that whatever the merits of PoS styles of investigation, these methods are less prevalent than the comparative method is.
[4] MHT thinks that Chomsky largely agrees with anti UG critics in “rejecting universal grammar” (1). This is a bit facile. What Chomsky rejects is that the kinds of principles we have identified as characteristic of UG are linguistically specific. By this he intends that they follow from more general principles. What he does not do, at least this is not what I do, is reject the principles of UG as targets of explanation. The problem with Evans and Levinson and Ibbotson and Tomasello is that their work fails to grapple with what GG has found in 60 years of research. There are a ton of non-trivial Gish facts (laws) that have been discovered. The aim is to explain these facts/laws, and ignoring them or not knowing anything about them is not the same as explaining them. Chomsky “believes” that language has properties that previous work on UG has characterized. What he is questioning is whether these properties are fundamental or derived. The critics of UG that MHT cites have never addressed this question so they and Chomsky are engaged in entirely different projects. 
            Last point: MHT notes that neophytes will be confused about all of this. However, a big part of the confusion comes from people telling them that Chomsky and Evans/Levinson and Ibbotson/Tomasello are engaged in anything like the same project.
[5] Let me repeat for the record that one can do MP and presuppose some conception of FL other than GB. IMO, most of the different “frameworks” make more or less the same claims. I will stick to GB because this is what I know best and MP indeed has targeted GB conceptions most directly.
[6] Or, more accurately, it aims to preserve most of it, just as General Relativity aimed to preserve most of Newtonian mechanics.

Wednesday, September 12, 2018

The neural autonomy of syntax

Nothing does language like humans do language. This is not a hypothesis. It is a simple fact. Nonetheless, it is often either questioned or only reluctantly conceded. Therefore, I urge you to repeat the first sentence of this post three times before moving forward. It is both true and a truism. 

Let’s go further. The truth of this observation suggests the following non-trivial inference: there is something biologically special about humans that enables them (us) to be linguistically proficient and this special mental power is linguistically specific. In other words, humans are uniquely cognitively endowed as a matter of biology when it comes to language and this biological gift is tailored to track some specific cognitive feature of language rather than (for example) being (just!) a general increase in (say) general brain power. On this view, the traditional GG conception stemming from Chomsky takes FL to be both species specific and domain specific. 

Before proceeding, let me at once note that these are independent specificity theses. I do this because every time I make this point, others insist on warning me that the fact mentioned in the first sentence does not imply the inference I just drew in the second paragraph. Quite right. In fact: 

It is logically possible that linguistic competence supervenes on no domain specific capacities but is still species specific in that only humans have (for example) sufficiently powerful general brains to be linguistically proficient. Say, for example, linguistic competence requires at least 500 units of cognitive power (CP) and only human brains can generate this much CP. However, modulo the extra CPs, the mental “programs” the CPs drive are the same as those that (at least some) other cognitive creatures enjoy, they just cannot drive them as fast or as far because of mileage restrictions imposed by low CP brains.

Similarly, it is logically possible that animals other than humans have domain specific linguistic powers. It is conceivable that apes, corvids, platypuses, manatees, and Portuguese water dogs all have brains that include FLs just like ours that are linguistically specific (e.g. syntax focused and not exercised in other cognitive endeavors). Were this so, then both they and we would have brains with specific linguistic sensitivities in virtue of having brains with linguistically bespoke wiring/circuitry or whatever specially tailored brain ware makes FL brains special. Of course, were I one of them I would keep this to myself as humans have the unfortunate tendency of dismembering anything that might yield scientific insight (or just might be tasty). If these other animals actually had an FL I am pretty sure some NIH scientist would be trying to figure out how to slice and dice their brains in order to figure out how its FL ticks.

So, both options are logically possible, but the GG tradition stemming from Chomsky (and this includes yours truly, a fully paid up member of this tribe) has doubted that these logical options are live, holding instead that when it comes to language only we humans are built for it and that what makes our cognitive profile special is a set of linguistically specific cognitive functions built into FL and dedicated to linguistic cognition. Or, to put this another way, FL has some special cognitive sauce that allows us to be as linguistically adept as we evidently are and we alone have minds/brains with this FL.

Nor do the exciting leaps of inference stop here. GG has gone even further out on the empirical limb and suggested that the bespoke property of FL that makes us linguistically special involves an autonomous SYNTAX (i.e. a syntax irreducible to either semantics or phonology and with its own special combinatoric properties). That’s right, readers, syntax makes the linguistic world go round and only we got it and that’s why we are so linguistically special![1] Indeed, if a modern linguistic Ms or Mr Hillel were asked to sum up GG while standing on one foot, s/he could do worse than say: only humans have syntax, all the rest is commentary.

This line of reasoning has been (and still is) considered very contentious. However, I recently ran across a paper by Campbell and Tyler (here, henceforth C&T) that argues for roughly this point (thx to Johan Bolhuis and William Matchin for sending it along). The paper has several interesting features, but perhaps the most intriguing (to me) is that Tyler is one of the authors. If memory serves, when I was growing up, Tyler was one of those who were very skeptical that there was anything cognitively special about language. Happily, it seems that times have changed.

C&T argues that the brain localizes syntactic processing in the left frontotemporal lobe and “makes a strong case for the domain specificity of the frontotemporal syntax system and its autonomy from domain-general networks” (132). So, the paper argues for a neural version of the autonomy of syntax thesis. Let me say a few more words about it.

First, C&T notes that (of course) the syntax dedicated part of the brain regularly interacts with the non-syntactic domain general parts of the brain. However, the paper rightly notes that this does not argue against the claim that there is an autonomous syntactic system encoded in the brain. It merely means that finding it will be hard, as this independence will often be obscured. More particularly, C&T says the activation of the domain general systems only arises “during task based language comprehension” (133). Tasks include having to make an acceptability judgment. When we focus on pure comprehension, however, without requiring any further “task,” we find that “only the left-lateralized frontotemporal syntax system and auditory networks are activated” (133). Thus, the syntax system only links to the domain general ones during “overt task performance” and otherwise activates alone. C&T note that this implies that the syntactic system alone is sufficient for syntactic analysis during language comprehension.

Second, C&T argue that arguments against the neural autonomy of syntax rest on bad definitions of domain specificity. More particularly, according to C&T the benchmarks for autonomy in other studies beg the autonomy question by embedding a “task” in the measure and so “lead to the activation of additional domain-general regions” (133). As C&T notes, when such “tasks” are controlled for, we only find activation in the syntax region.

Third, the relevant notion of syntax is the one GGers know and love. For C&T takes syntax to be the prime species-specific feature of the brain and understands syntax in GGish terms to be implicated in “the construction of hierarchical syntactic structures.” C&T contrasts hierarchical relations with “adjacency relationships,” which it claims “both human and non-human primates are sensitive to” (134). This is pretty much the conventional GG view and C&T endorses it.

And there is more. C&T endorses the Hauser, Chomsky, Fitch distinction between FLN and FLB. This is not surprising, for once one adopts an autonomy of syntax thesis and appreciates the uniqueness of syntax in human minds/brains, the distinction follows pretty quickly. Let me quote C&T (135):

In this brief overview, we have suggested that it is necessary to take a more nuanced view to differentiating domain-general and domain-specific components involved in language. While syntax seems to meet the criteria for domain-specificity….there are other key components in the wider language system which are domain-general in that they are also involved in a number of cognitive functions which do not involve language.

C&T has one last intriguing feature, at least for a GGer like me. Neither the name ‘Chomsky’ nor the term ‘generative grammar’ is ever mentioned, not even once (shades of Voldemort!). Quite clearly, the set of ideas that the paper explores presupposes the basic correctness of the Chomskyan generative enterprise. C&T argues for a neural autonomy of syntax thesis and, in doing so, it relies on the main contours of the Chomsky/GG conception of FL. Yes, if C&T is correct it adds to this body of thought. But it clearly relies on its main claims and presupposes their essential correctness. A word to this effect would have been nice to see. That said, read the paper. Contrary to the assumptions of many, it argues for a cog-neuro vindication of the Chomsky conception of language. Even if it dares not speak his name.


[1] I suspect that waggle dancing bees and dead reckoning insects also non-verbally advance a cognitive exceptionalism thesis and preen accordingly.