
Monday, January 27, 2014

Being Edgy

For light entertainment, I have just read the answers to the Edge question of the year: “What scientific idea is ready for retirement?” Edge.org (here) is the fashionable online “salon” (think French Belle Époque/early 20th century) where the illuminati, literati, cognoscenti and scientific elite take on the big issues of the day, impresarioed by John Brockman, the academic world’s favorite Rumpelstiltskin: a spinner of dry academic research into popular science gold (and by ‘gold’ I mean $$$$). At any rate, reading the page-lengthish comments has been quite entertaining and I recommend the pieces to you as a way to unwind after a hard day toiling over admission and job files. The answers weigh in at 213 pages of print. Here are a few papers that got me going.

Not surprisingly, two of those that raised my blood pressure were written about language. One is by Benjamin Bergen (BB) (20-1). He is a cog sci prof at UCSD and his proposal for an idea worth retiring is “Universal Grammar.” When I first read this I was really pissed. But I confess that after reading what he took UG to be, I could understand why BB wants it retired.

BB understands UG as claiming two things: (i) that there are “core commonalities across languages” and (ii) that such commonalities exist as a matter of “genetic endowment.”  He reports that “field linguists” have discovered that languages “are much more diverse than we originally thought” (who this ‘we’ is supposed to include is a bit mystifying. Not even a rabid Chomskyan like me has ever doubted that the surface diversity among languages is rather extensive). In particular, not all languages have “nouns and verbs” and not all “embed propositions in others.” In other words, it seems that field linguists (you can see the long shadow of Mr D. Everett here, more anon) have been busy demonstrating something that has been common knowledge for a long, long time (and is still the common view): that the surface linguistic forms we find across natural languages are very diverse, and that this diversity of surface forms indicates that there are few surface-manifest universals out there. Oddly, BB is happy to concede that “perhaps the most general computational principles are part of our innate language-specific human endowment” but that “this won’t reveal much about how language develops in children.”

There is lots to quibble about here: (i) most importantly, that this Greenbergian gloss on “universal grammar” is not how people like me (and, more importantly, Chomsky) understand UG; (ii) that BB seems not to have read any of the work on Greenberg-style universals common in the current literature (think the Cinque hierarchy); (iii) that if UG is correct then this changes the learning problem; (iv) that “inferring the meaning of words” exploits emerging syntactic knowledge that itself piggybacks on the innate computational principles of UG (e.g. Gleitman); etc.  However, putting all of this to one side, I have nothing against giving up BB’s Greenbergian conception of Universal Grammar.

Indeed, I would go further. We should also give up the idea of language as a proper object of inquiry because it is almost certainly not a natural kind. Generative Linguists of the Chomsky stripe should make clear that strictly speaking there is no such thing as English, French, Inuit, etc. and so it is not surprising that these things have no common properties.  BB’s objections are with claims that my team doesn’t make; I (we) don’t suppose that languages universally have certain properties, only that I-languages do. And these properties involve precisely those features that BB seems happy to concede are species specific and biologically given. For my money, I am happy to throw BB’s notion of Universals on the scientific trash heap and would add ‘language’ to the pyre.

As mentioned, BB is clearly channeling Daniel L. Everett (DE) in his comment. DE speaks for himself here (203-205). He wants to dump the idea that “human behavior is guided by highly specific innate knowledge.”[1] You might think from this opening line that the target is once again going to be domain-specific principles of UG. But you would be wrong! It seems that what’s got DE riled this time is the very idea of innate characteristics. DE finds any idea of a non-environmental input to development or learning to be illicit. So not only does DE appear to object to domain-specific, natively given mechanisms, he seems to object to any mental or neural structure at all that is not the result of environmental input. Wow!

I confess that I found it impossible to make sense of any of this. I can think of no model of development or of learning/acquisition that does not rely on some given biases, however modest, that are required to explain the developmental trajectory.  The argument is not over whether such biases are required, but over what they look like; hence the discussion concerning domain specificity. But that’s not DE’s position. He wants to dump the distinction between environment and “innate predispositions or instincts” because “we currently have no way of distinguishing” them.  Really? No way? Not even in a particular domain of inquiry?

What are the arguments DE musters? There are three. All piss poor.

First, DE notes that environmental influence is pervasive: “there is never a period in the development of the individual…when they are not being affected by their environment.” Hence, DE concludes, we cannot currently distinguish what is environmental from what is innately given. Hmm. The problem is complicated, hence unsolvable? The claim that development arises as the joint contribution of input + an initial state, and that we therefore need to know something about the initial state, does not imply that it is easy to decipher what the architecture of the initial state(s) is. DE and I disagree about what the initial state for grammar development is, my UG being very different from his. But with no innate principles/biases there is no learning/development. So, if you want to understand the latter you need to truck in the former, no matter how hard it is to tease them apart.[2]

Second, it seems that one cannot give an adequate “definition” of ‘innate.’ Every definition has “been shown to be inadequate.” Of course, every definition of everything has been shown to be inadequate. There are no interesting definitions of anything, including ‘bachelor.’ However, there are proposals that are serviceable in different domains and that inquiry aims to refine. That’s what science does. For what I do in syntax, ‘innate’ denotes the given biases/structures required to map environmentally provided PLD into a G.  I have no idea whether these given biases are coded in the genes, are epigenetic, or are handed over to each child by his/her guardian angel. Not that I am denying that these other questions are interesting and worth investigating. However, for what I do, this is what I mean by ‘innate.’ Indeed, I suspect that this is what it more or less always means: what needs to be given so that adventitious input can be generalized in the attested ways. Data do not generalize themselves. The principles of generalization must come from somewhere. We call the place they come from the native or instinctual. And though it is an interesting question to figure out how such native information is delivered to the infant, delivered somehow it must be, for without it development/learning/acquisition is impossible.

Third, DE asserts that one cannot propose that some character is innate without “some evolutionary account of how it might have gotten there.” If this is the case, then most of biology and physics might as well stop right now. This view is just nuts! It’s a version of the old showstopper: you don’t know anything until you know everything, which, if true, means that we might as well stop doing anything at all. Let’s assume, for the sake of argument, that knowing the evolutionary history of a trait is necessary for fully understanding how it works (btw, I don’t believe this: we can know a lot about how something (e.g. wings, bee dances) works without knowing much about how it developed). Even were this the case, it is simply false that one cannot know anything about the mechanics of a system without knowing how it arose. We know a whole lot about gravity and still don’t know how it “arose.” But this position is not only false in practice, it is methodologically sterile, as it endorses the all-or-nothing view of inquiry, and this, I suspect, is why DE proposes it. What DE really wants (surprise, surprise) is to end Chomsky-style work in linguistics. He reaches for any argument to stop it. The fact that what he says verges on the methodologically incoherent matters little. This is war, and as in love, for DE, it seems, all things are fair. Read this piece and weep.

As an antidote to DE (and Gopnik) it is worth reading Oliver Scott Curry’s contribution (38-9) on Associationism.[3] He writes that associationism is “hollow - a misleading redescription of the very phenomenon that is in need of explanation.” Right on! Curry makes the obvious, yet correct, point that absent a given mechanism that allows one to divide input into the relevant and the irrelevant, there is no way to use input. Using input requires “prior theory.” A modest point, but given how hard it is to wean people from their empiricist predilections, always a useful one to make.

There are other entries that will infuriate, but I will leave their debunking as an exercise for the reader. For the interested, take a look at N.J. Enfield’s contribution (47-8) heroically defending the view that there is more to “language” than competence.

I should add that the immediately linguistically relevant articles are a small subset of the Edge pieces. Maybe it’s a sign of how little what current linguists do is prized that there is not a single piece in the lot by anyone I would consider to be doing serious linguistics. It’s a clear sign that what we do is no longer considered relevant to wider intellectual concerns, at least by the “Edgy.” This was not always so. Chomskyan linguistics, after all, was once the leading edge (sic!) of the “cognitive revolution.” We really need to do something about this. Maybe I will post on this later. Any suggestions for raising our profile would be welcome.

This said, there are lots of interesting papers in the collection: on the use of stats (75-77 and 176-7), minds and brains (208-9), mysterianism (7-8), big data (24-5 and 176-7), replication (189-90), the scientific method (147-8), science funding (118-9), science vs. technology (211-12), unification (88-90), simplicity (168-9, 180-1), elegance (93-4), falsifiability (202-3), the (very animated and heated) fight over current high theory in physics (every article by the many physicists), among others. The entries are short, and often provocative and entertaining. So, if you are looking for bathroom reading, I cannot recommend this highly enough.







[1] Note that DE’s explanandum is “behavior.” But this is the wrong target for explanation. Steve Pinker’s very nice piece (190-192) puts it very well, so let me quote:
More than half a century after the cognitive revolution, people still ask whether a behavior is genetically or environmentally determined. Yet neither genes nor the environment can control the muscles directly. The cause of behavior is the brain. While it is sensible to ask how emotions, motives or learning mechanisms have been influenced by the genes, it makes no sense to ask this of behavior itself.
[2] Alison Gopnik (172-3) has a similarly confusing Edge comment. She too seems to think that the fact that there is a lot of interaction between environmental input and initial state endowments implies that the whole notion of an initial state is misconceived. IMO, her “argument” is little better than DE’s.
[3] See Andy Clark’s piece on I/O models (147) as well.

1 comment:

  1. If you're interested, I've started a website, eating-the-elephant.com, which breaks up long-form web content into bite-sized chunks which are then delivered to your email each day.

    One of my inspirations was the Edge QOY because it's all great stuff, but way too long to read in one sitting so I needed a way for it to queue up for me automatically a little bit at a time.

    I'm only doing questions more than 10 years old so I don't step on anybody's toes, but if you're interested in catching up on some of the old questions it's a great resource.

    http://eating-the-elephant.com/edge/
