Showing posts with label Cedric Boeckx. Show all posts

Tuesday, April 18, 2017

Inference to some great provocations

David Berlinski is editing a newish online magazine, Inference, articles from which I have mentioned in several previous posts. The latest issue is full of fun for linguists, as there are four articles of immediate relevance. Here’s the link for the issue. Let me say a word or two about the pieces.

The first is an essay by Chomsky that goes over familiar ground regarding the distinctive nature of human linguistic capacity. He notes that this observation has a Cartesian pedigree: as early as it was noticed, language was recognized as distinctive (all and only humans have it), wondrous (it was free and capable of expressing unboundedly many thoughts), and demanding of some kind of explanation (it really didn’t fit in well with what was understood to be the causal structure of the physical world).

As Chomsky notes, Cartesians had relatively little of substance to say about the underpinnings of this wondrous capacity, mainly because the 17th century lacked the mathematical tools for the project. They had no way of describing how it was possible to “make infinite use of finite means” as von Humboldt put it (2). This changed in the 20th century with Church, Gödel, Post and Turing laying the foundations of computation theory. This work “demonstrated how a finite object like the brain could generate an infinite variety of expressions.” And as a result, “[i]t became possible, for the first time, to address part of” the problem that the Cartesians identified (2).

Note the ‘part of’ hedge. As Chomsky emphasizes, the problem the Cartesians identified has two parts. The first, and for them the most important feature, is the distinction between “inclined” vs “impelled” behavior (3). Machines are impelled to act, never “inclined.” Humans, being free agents, are most often “inclined” (though they can be “compelled” as well). Use of language is the poster child for inclined behavior. Cartesians had no good understanding of the mechanics of inclination. As Chomsky observes, more than 300 years later, neither do we. As he puts it, language’s “free creative use remains a mystery,” as does free action in general (e.g. raising one’s hand) (3).

The second part, one that computation theory has given us a modest handle on, is the unbounded nature of the thoughts we can express. This feature very much impressed Galileo and Arnauld & Lancelot and von Humboldt, and it should impress you too! The “infinite variety” of meaningfully distinct expressions characteristic of human language “surpasse[s] all stupendous inventions” (1).  Chomsky has redubbed this feature of language “the Basic Property” (BP). BP refers to a property of the human brain, “the language faculty,” and its capacity to “construct a digitally infinite array of structured expressions” each of which “is semantically interpreted as expressing a thought, and each can be externalized by some sensory modality such as speech” (2).  BP is what GG has been investigating for the last 60 years or so. Quite a lot has been discovered about it (and yes, there is still lots that we don’t know!).
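For readers who like things concrete, here is a toy sketch of the Basic Property (my own illustration, not Chomsky's official formalism; the mini-lexicon and function names are invented): a single finite operation, applied to its own outputs, yields an unbounded array of hierarchically structured objects.

```python
# Toy sketch: Merge as binary set-formation. One finite operation,
# reapplied to its own outputs, generates unboundedly many
# hierarchically structured objects -- infinite use of finite means.

def merge(a, b):
    """Combine two syntactic objects into a new unordered set."""
    return frozenset([a, b])

def depth(obj):
    """Hierarchical depth of a merged object; lexical atoms are depth 0."""
    if isinstance(obj, frozenset):
        return 1 + max(depth(x) for x in obj)
    return 0

# Hypothetical mini-lexicon.
the, dog, barked = "the", "dog", "barked"

dp = merge(the, dog)      # {the, dog}
tp = merge(dp, barked)    # {{the, dog}, barked} -- hierarchy, no linear order
```

Note that the outputs are sets, not strings: the structure comes first, and externalization "by some sensory modality" is a separate step.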

Chomsky emphasizes something that is worth reemphasizing: these facts about language are not news. That humans have linguistic creativity in the two senses above should not really be a matter of dispute. That humans do language like no other animal does should also be uncontroversial. How we do this is a very tough question, only a small part (very small part) of which we have managed to illuminate. It is sad that much debate still circulates around the whether question rather than the how. It is wasted time.

An important theme in Chomsky’s essay turns on how the world looks when we have no idea what’s up. Here is a quote that I believe all good scientifically inclined GGers should have tattooed on themselves (preferably in some discreet place) (3):

When understanding is thin, we expect to see extreme variety and complexity.

Absolutely! Variety and complexity are hallmarks of ignorance. And this is why progress and simplicity go hand in hand. And this is why I have clasped to my heart Dresher’s apposite dictum: There should be only two kinds of papers in linguistics: (i) papers that show that two things that look completely different are roughly the same and (ii) papers that show that two things that are roughly the same are in fact identical. These are the papers that highlight our progressively deeper understanding. Complication is often necessary, but it is progressive just in case it paves the way for greater simplicity.

Unification and simplicity are, thus, leading indicators of scientific insight. Within linguistics they have a second function: they allow one to start addressing the issue of how FL might have evolved. Here’s Chomsky:

In the analysis of the Basic Property, we are bound to seek the simplest computational procedure consistent with the data of language. Simplicity is implicit in the basic goals of scientific inquiry. It has long been recognized that only simple theories can attain a rich explanatory depth. “Nature never doth that by many things, which may be done by a few,” Galileo remarked, and this maxim has guided the sciences since their modern origins. It is the task of the scientist to demonstrate this, from the motion of the planets, to an eagle’s flight, to the inner workings of a cell, to the growth of language in the mind of a child. Linguistics seeks the simplest theory for an additional reason: it must face the problem of evolvability. Not a great deal is known about the evolution of modern humans. The few facts that are well established, and others that have recently been coming to light, are rather suggestive. They conform to the conclusion that the language faculty is very simple; it may, perhaps, even be computationally optimal, precisely what is suggested on methodological grounds.

Unless FL is simpler than we have taken it to be up till now (e.g. far simpler than, say, GBish models make it out to be), there is little chance that we will be able to explain its etiology. So there are both general methodological grounds for wanting simple theories of FL and linguistics-internal reasons for hoping that much of the apparent complexity of FL is just apparent.

Chomsky’s piece proceeds by rehearsing in short form the basic minimalist trope concerning evolvability. First, that we know little about it and will likely never know very much. Second, that FL is a true species property, as the Cartesians surmised. Third, that FL has not evolved much since humans separated. Fourth, that FL is a pretty recent biological innovation. The third and fourth points are taken to imply that the Basic Property aspect of FL must be pretty simple, in the sense that what we see today pretty well reflects the original evo innovation, its properties being physically simple in that they have not been shaped by the forces of selection. In other words, what we see in BP is pretty much undistorted by the shaping effects of evolution and so largely reflects the physical constraints that allowed it to emerge.

All of this is by now pretty standard stuff, but Chomsky tells it well here. He goes on to do what any such story requires. He tries to illustrate how a simple system of the kind he envisions will have those features that GG has discovered to be characteristic of FL (e.g. structure dependence, unboundedly many discrete structures capable of supporting semantic interpretation, etc.). This second step is what makes MP really interesting. We have a pretty good idea what kinds of things FL concerns itself with. That’s what 60 years of GG research has provided. MP’s goal is to show how to derive these properties from simpler starting points, the simpler the better. The targets of explanation (the explananda) are the “laws” of GB. MP theories are interesting to the degree that they can derive these “laws” from simpler, more principled starting points. And that, Chomsky argues, is what makes Merge-based accounts interesting: they derive features that we have every reason to believe characterize FL.[1]

Two other papers in the issue address these minimalist themes. The first is a review of the recent Berwick & Chomsky (B&C) book Why Only Us. The second is a review of a book on the origins of symbolic artifacts. Cedric Boeckx (CB) reviews B&C. Ian Tattersall (IT) reviews the second. The reviews are in interesting conflict.

The Boeckx review is quite negative, the heart of the criticism being that asking ‘why only humans have language’ is the wrong question. What makes it wrong? Well, frankly, I am not sure. But the review seems to hold that asking it endorses a form of “exceptional nativism” (7) that fails to recognize “the mosaic character of language,” which, if I get the point, implies eschewing “descent with modification” models of evolution (the gold standard according to CB) in favor of “top-down, all-or-nothing” perspectives that reject comparative cognition models (or any animal models), dismiss cultural transmission as playing any role in explaining “linguistic complexity,” and generally take a jaundiced view of any evolutionary accounts of language (7-8). I am actually skeptical about all of this.

Before addressing these points, however, it is interesting that IT appears to take the position that CB finds wrong-headed. He thinks that human symbolic capacities are biologically quite distinctive (indeed “unique”) and very much in need of some explanation. Moreover, in contrast to CB, IT thinks it pretty clear that this “symbolic activity” is of “rather recent origin” and that, “as far as can be told, it was only our lineage that achieved symbolic intelligence with all of its (unintended) consequences” (1). If we read “symbolic” here to mean “linguistic” (which I think is a fair reading), it appears that IT is asking for exactly the kind of inquiry that CB thinks misconceived.

That said, let’s return to CB’s worries. The review makes several worthwhile points. IMO, the two most useful are the observation that there is more to language evolution than the emergence of the Basic Property (i.e. Merge and discretely infinite hierarchically structured objects) and that there may be more time available for selection to work its magic than is presupposed.  Let’s consider these points in turn.

I think that many would be happy to agree that though BP is a distinctive property of human language it may not be the only distinctive linguistic property. CB is right to observe that if there are others (sometimes grouped together as FLW vs FLN) then these too need to be biologically fixed, and that, to date, MP has had little to say about them. One might go further; to date it is not clear that we have identified many properties of FLW at all. Are there any?

One plausible candidate involves those faculties recruited for externalization. It is reasonable to think that once FLN was fixed in the species, linking its products to the AP interface required some (possibly extensive) distinctive biological retrofitting. Indeed, one might imagine that all of phonology is such a biological kludge and that human phonology has no close biological analogues outside of humans.[2] If this is so, then the question of how much time this retrofitting required, and how fast the mechanisms of evolution (e.g. selection) operate, is an important one. Indeed, if there was special retrofitting for FLW linguistic properties then it must all have taken place before humans went their separate ways, for precisely the reasons that Chomsky likes to (rightly) emphasize: not only can any human acquire the recursive properties of any G, s/he can also acquire the FLW properties of any G (e.g. any phonology, morphology, metrical system, etc.).[3] If acquiring any of these requires a special distinctive biology, then this must have been fixed before we went our separate ways, or we would expect, contrary to apparent fact, that e.g. some “accents” would be inaccessible to some kids. CB is quite right that it behooves us to start identifying distinctive linguistic properties beyond the Basic Property and asking how they might have become fixed. And CB is also right that this is a domain in which comparative cognition/biology would be very useful (and has already begun; see note 2). It is less clear that any of this applies to explaining the evolution of the Basic Property itself.

If this is right, it is hard for me to understand CB’s criticism of B&C’s identification of hierarchical recursion as a very central distinctive feature of FL and asking how it could have emerged.  CB seems to accept this point at times (“such a property unquestionably exists” (3)) but thinks that B&C are too obsessed with it. But this seems to me an odd criticism. Why? Because B&C’s way into the ling-evo issues is exactly the right way to study the evolution of any trait: First identify the trait of interest. Second, explain how it could have emerged.  B&C identify the trait (viz. hierarchical recursion) and explain that it arose via the one time (non-gradual) emergence of a recursive operation like Merge. The problem with lots of evo of lang work is that it fails to take the first step of identifying the trait at issue. But absent this any further evolutionary speculation is idle. If one concedes that a basic feature of FL is the Basic Property, then obsessing about how it could have emerged is exactly the right way to proceed.

Furthermore, and here I think that CB’s discussion is off the mark, it seems pretty clear that this property is not going to be amenable to anything but a “top-down, all-or-nothing” account. What I mean is that recursion is not something that arises in steps, a point that Dawkins made succinctly in support of Chomsky’s proposal (see here). As he notes, there is no such thing as “half recursion,” and so there will be no very interesting “descent with modification” account of this property. Something special happened in humans. Among other things this led to hierarchical recursion. And this thing, whatever it was, likely came in one fell swoop. This might not be all there is to say about language, but it is one big thing about it, and I don’t see why CB is resistant to this point. Or, put another way, even if CB is right about many other features of language being distinctive and amenable to more conventional evo analysis, that does not gainsay the fact that the Basic Property is not one of these.

There is actually a more exorbitant possibility that perhaps CB is reacting to. As the review notes (7): “Language is special, but not all that special; all creatures have special abilities.” I don’t want to over-read this, but one way of taking it is that different “abilities” supervene on common capacities. This amounts to a warning not to confuse apparent expressions of capacities for fundamental differences in capacities. This is a version of the standard continuity thesis (which Lenneberg, among others, argued is very misleading (i.e. false) wrt language). On this view, there is nothing much different in the capacities of the “language ready” brain from the “language capable” brain. They are the same thing. In effect, we need add nothing to an ape brain to get ours, though some reorganization might be required (i.e. no new circuits). I personally don’t think this is so. Why? For the traditional reasons that Chomsky and IT note, namely that nothing else looks like it does language like we do, even remotely. And though I doubt that hierarchical recursion is the whole story (and have even suggested that something other than Merge is the secret sauce that got things going), I do think that it is a big part of it and that downplaying its distinctiveness is not useful.

Let me put this another way. All can agree that evolution involves descent with modification. The question is how big a role to attribute to descent and how much to modification (as well as how much modification is permitted). The MP idea can be seen as saying that much of FL was there before Merge got added. Merge is the “modification,” all else the “descent.” There will be features of FL continuous with what came before and some not continuous. No mystery about the outline of such an analysis, though the details can be very hard to develop. At any rate, it is hard for me to see what would go wrong if one assumed that Merge (like the third color neuron involved in trichromatic vision (thx Bill for this)) is a novel circuit and that FL does what it does by combining the powers of this new operation with those cognitive/computational powers inherited from our ancestors. That would be descent with modification. And, so far as I can tell, that is what a standard MP story like that in B&C aims to deliver. Why CB doesn’t like (or doesn’t appear to like) this kind of story escapes me.

Observe that how one falls on the distinctiveness-of-BP issue relates to what one thinks of the short-time-span observation (i.e. language is of recent vintage, so there is little time for natural selection or descent with modification to work its magic). The view Chomsky (and Berwick and Dawkins and Tattersall) favor is that there is something qualitatively different between language-capable brains and ones that are not. This does not mean that they don’t also greatly overlap. It just means that they are not capacity congruent. But if there is a qualitative difference (e.g. a novel kind of circuit) then, in accounting for the distinctiveness, the emphasis will be on the modification, not the descent. B&C are happy enough with the idea that FL properties are largely shared with our ancestors. But there is something different, and that difference is a big deal. And we have a pretty good idea about (some of) the fine structure of that difference, and that is what Minimalist linguistics should aim to explain.[4] Indeed, I have argued and would continue to argue that the name of the Minimalist game is to explain these very properties in a simple way. But I’ve said that already here, so I won’t belabor the point (though I encourage you to do so).

A few more random remarks and I am done. The IT piece provides a quick overview of how distinctive human symbolic (linguistic?) capacities are. In IT’s view, very. In IT’s view, the difference also emerged very recently, and understanding that is critical to understanding modern humans. And he is not alone. The reviewee, Genevieve von Petzinger, appears to take a similar view, dating the start of the modern human mind to about 80kya (2). All this fits in with the dates that Chomsky generally assumes. It is nice to see that (some) people expert in this area find these datings, and the idea that the capacity of interest is unique to us, credible. Of course, to the degree that this dating is credible, and to the degree that this is not a long time for evolution to exercise its powers, the harder the evolutionary problem becomes. And, of course, that’s what makes the problem interesting. At any rate, what the IT review makes clear is that the way Chomsky has framed the problem is not without reasonable expert support. Whether this view is correct is, of course, an empirical matter (and hence beyond my domain to competently judge).

Ok, let me mention two more intellectual confections of interest and we are done. I will be short.

The first is a review of Wolfe’s book by David Lobina and Mark Brenchley. It is really good and I cannot recommend it highly enough. I urge you in particular to read the discussion on recursion as self-reference vs self-embedding and the very enlightening discussion of how Post’s original formalism (might have) led to some confusion on these issues. I particularly liked the discussion of how Merge de-confuses them, in effect by dumping the string based conception of recursion that Post’s formalism used (and which invited a view of recursion as self-embedding) and implementing the recursive idea more cleanly in a Merge like system in which linguistic structures are directly embedded in one another without transiting through strings at all. This cleanly distinguishes the (misleading) idea that the recursion lies with embedding clauses within clauses from the more fundamental idea that recursion requires some kind of inductive self-reference. Like I said, the discussion is terrific and very useful.
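To make the distinction vivid with a toy example of my own (nothing from the review itself; the function names and sentences are invented): a definition can be recursive in the fundamental sense (it refers to itself inductively) without its outputs exhibiting any self-embedding at all, and conversely self-embedded outputs are just one shape a recursive definition can produce.

```python
# Toy illustration: recursion as inductive self-reference in a
# definition vs. self-embedding as a property of the outputs.

def coordinate(n):
    """Recursively *defined* (the function calls itself), yet every
    output is flat coordination: recursion without self-embedding."""
    if n == 0:
        return ("dogs",)
    return coordinate(n - 1) + ("and", "dogs")

def embed(n):
    """Also recursively defined, but here each output literally
    contains the previous output as a part: self-embedding."""
    if n == 0:
        return ("Mary", "left")
    return ("John", "said", embed(n - 1))

flat = coordinate(2)   # a flat 5-word sequence
nested = embed(2)      # a structure containing a structure containing...
```

Both functions are recursive in the self-reference sense; only the second builds clause-within-clause structure, which is roughly why treating "recursion = embedding" conflates a property of definitions with a property of strings.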

And now for dessert: read David Adger’s fun review of Arrival. I confess that I did not really like the movie that much, but after reading David’s review, I intend to re-see it with a more open mind.

That’s it. Take a look at the issue of Inference. It’s nice to see serious linguistic issues intelligently discussed in a non-specialist’s venue. It can be done and done well. We need more of it.




[1] Chomsky also mentions that lexical items have very distinctive properties and that we understand very little about them. This has become a standard trope in his essays, and a welcome one. It seems that lexical items are unlike animal signs in that the latter are really “referential” in ways that the former are not. The hows and whys behind this, however, are completely opaque.
[2] There has been quite a lot of interesting comparative work done, most prominently by Berwick, on relating human phonology with bird song. See here and here for some discussion and links.
[3] There is another possibility: once FLN is in place there is only one way to retrofit all the components of FLW. If so, then there is no selection going on here and so the fact that all those endowed with FLNs share common FLWs would not require a common ancestor for the FLWs. Though I know nothing about these things, this option strikes me as far-fetched. If it is, then the logic that Chomsky has deployed for arguing that FLN was in place before humans went their separate ways would hold for FLW as well.
[4] CB makes a claim that is often mooted in discussions about biology. It is Dobzhansky’s dictum that nothing in biology makes sense except in the light of evolution. I think that this is overstated. Lots of biology “makes sense” without worrying about origin. We can understand how hearts work or eyes see or vocal tracts produce sounds without knowing anything at all about how they emerged. This is not to diss the inquiry: we all want to know how things came to be what they are. But the idea that natural selection is the only thing that makes sense of what we see is often overblown, especially so when Dobzhansky quotes are marshaled. For some interesting discussion of this see this.

Friday, January 18, 2013

Effects, Phenomena and Unification


In the previous post, I mentioned that there is a general consensus that UG has roughly the features described in GB. In the comments, Alex quotes Cedric Boeckx as follows and asks if Cedric is “a climate change denier.”

I think that minimalist guidelines suggest an architecture of grammar that is more plausible biologically speaking than a fully specified, highly specific UG – especially considering the very little time nature had to evolve this remarkable ability that defines our species. If syntax is at the heart of what had to evolve de novo, syntactic parameters would have to have been part of this very late evolutionary addition. Although I confess that our intuitions pertaining to what could have evolved very rapidly are not as robust as one would like, I think that Darwin’s Problem (the logical problem of language evolution) becomes very hard to approach if a GB-style architecture is assumed.

The answer is no, he is not (but thanks for asking). I’ll explain why but this will involve rehearsing material I’ve touched upon elsewhere so if you feel you already know the answer please feel free to go off and do something more worthwhile.

My friends in physics (remember, I am a card carrying hyper-envier) make a distinction between effective and fundamental theories.  Effective theories are those that are phenomenologically pretty accurate. They are also the explananda for fundamental theories.  Using this terminology, GB is an effective theory, and minimalism aspires to develop a fundamental theory to explain GB “phenomena.” Now, ‘phenomena’ is a technical term and I am using it in the sense articulated in Bogen and Woodward (here). Phenomena are well-grounded significant generalizations that form the real data for theoretical explanation. Phenomena are often also referred to as ‘effects.’ Examples in physics include the Gas Laws, the Bernoulli effect, black body radiation, Doppler effects, the photoelectric effect etc.  In linguistics these include island effects, principle A, B and C effects, weak and strong crossover effects, the PRO theorem, Superiority effects etc. GB theory can be seen as a fairly elaborate compendium of these. Thus, the various modules within GB elaborate a series of well-massaged generalizations that are largely accurate phenomenological descriptions of UG. I have at times termed these ‘Laws of Grammar,’ (said plangently you can sound serious, grown-up and self-important) to suggest that those with minimalist aspirations should take these as targets of explanation.  Thus, in the requisite sense, GB (and its cousins described in the last post) can serve as an effective theory, one whose generalizations a minimalist account, a fundamental theory, should aim to explain. 

I hope it is clear how this all relates to the Cedric quote above, but if not, here’s the relevance. Cedric rightly observes that if one is interested in evolutionary accounts then GB cannot be the fundamental theory of linguistic competence. It just appears too complex: all that internal modularity (case and theta and control and movement and phrase structure), all those different kinds of locality conditions (binding domains and subjacency/phase and minimality and phrasal domains of a head and government), all those different primitives (case assigners, case receivers, theta markers, arguments, anaphors, bound pronouns, r-expressions, antecedents, etc., etc., etc.). Add to this that this thing popped out in such a short time, and there really seems no hope for a semi-reasonable (even just-so) story. So, GB cannot be fundamental. BTW, I am pretty sure that I have interpreted Cedric correctly here, for we have discussed this a lot over the last five to ten years on a pretty regular basis.

Given the distinction between GB as effective theory and MP as aiming to develop a fundamental theory, how should a thoroughly modern minimalist proceed? Well, as I mentioned before (here), one model is Chomsky’s unification of Ross’s islands via subjacency. What Chomsky did was (i) treat Ross’s descriptions as effective and (ii) propose how to derive them on empirically, theoretically, and computationally more natural grounds. Go back and carefully read ‘On Wh-Movement’ and you’ll see how these various strands combine in his (to my taste buds) rather beautiful account. Taking this as a model, a minimalist theory should aspire to the same kind of unification. However, this time it will be a lot harder, for two main reasons.

First, what MP aspires to unify have been thought to be fundamentally different from “the earliest days of generative grammar” (two points and a bonus question to anyone who identifies the source of this quote). Unifying movement, binding and control goes against the distinction between movement and construal that has been a fundamental part of every generative approach to grammar since Aspects (and before, actually), as has the distinction between phrase structure and movement. However, much minimalist work over the last 20 years can be seen as chipping away at the differences: Chomsky’s 1993 unification of case as a species of movement or Probe-Goal licensing (PGL), the assimilation of control to a species of movement (moi) or PGL (Landau), reflexive licensing as a species of movement (Idsardi and Lidz, moi) or PGL (Reuland), the collapsing of phrase structure and movement as species of E/I-merge, the reduction of Superiority effects to movement via minimality. All of these are steps in reducing the internal modularity of GB and erasing the distinctions between the various kinds of relationships described so well in GB. This unification, if it can be pulled off (and showing that it might be has been, IMO, the distinctive contribution of MP), would do for GB what Chomsky did for islands, and the resultant theory would have a decent claim to being fundamental.

The second hurdle will be articulating some notion of computational complexity that makes sense. In ‘On Wh-Movement,’ Chomsky tried to suggest some computational advantages of certain kinds of locality considerations. Whatever his success, the problem of finding reasonable third-factor features with implications for linguistic coding is far more daunting, as I’ve discussed in other posts. The right notion, I have suggested elsewhere, will reflect the actual design features of the systems that FL interacts with and that use it. Sadly, we know relatively little about interface properties (especially CI), and we know relatively little about how FL would fit in with other cognitive modules. We know a bit more about the systems that use FL, and there have been some non-trivial results concerning what kinds of considerations matter. As I have discussed this in other posts, I will not burden you with a rehash (see here and here). Consequently, whatever is proposed is very speculative, though speculation is to be encouraged, for the problem is interesting and theoretically significant. This said, it will be very hard, and we should appreciate that.

So, is Cedric a denier? Nope. He accepts the “laws of grammar” as articulated in GB as more or less phenomenologically correct. Is his strategy rational? Yup. The aim should be to unify these diverse laws in terms of more fundamental constructs and principles. Are people who quote Cedric to “épater les Norberts” doing the same thing? Not if they are UG deniers and not if their work does not aim to explain the phenomena/effects that GB describes. These individuals are akin to climate change deniers for their work has all the virtues of any research that abstracts away from the central facts of the matter.