Sunday, June 8, 2014

Lecture 1: Comments

Here are some comments on lecture 1 (here). I’ll try to comment on the others as I get through them sometime in the next couple of weeks.

The aim of the first lecture is to locate the Generative enterprise conceptually. Chomsky notes that there have been two lasting themes since its inception: (i) that the central fact about Natural Language is that it involves the generation of an infinite array of hierarchically structured objects that link to systems of thought and systems of externalization, the aim of GG being to describe this I-language in detail; and (ii) that language is a biological system, not a social construct, and should be studied the way other such biological systems are studied, the aim being to figure out the underlying structure of its phenotypic properties.

Given these two themes there are several obvious projects: (a) study the structure of I-language, (b) study how I-language is acquired, and (c) study how I-language emerged in the species. None of this is new or exciting to readers of this blog, I would hope. What is fun to see is how Chomsky understands these projects in a wider philosophical and historical context. Chomsky is very, very good at giving a broad-sweep history of the earlier influences that the modern perspective has adopted and of where it differs from them. He's very good at comparing his views to those of these important precursors. It's actually amazing how many giants' shoulders are trotted out for perching on (Darwin, Descartes, Newton, Leibniz, Galileo, Locke, Hume, von Humboldt, Russell, Turing, Church, Kleene) and how many views are dumped on (Dummett, Quine, Lewis, Tomasello, Construction Grammarians, Churchlands).

I especially enjoyed Chomsky's discussion of the history of the calculus and the early history of Chemistry. He makes a point that he has made before but that is worth repeating in the current cultural climate: methodological pronouncements often lead us astray, if the history of science is any indication. Newton's calculus had real foundational problems, ones that Berkeley identified. It seems that parts of the system were based on equivocations, which vitiated many of the proofs. British mathematicians took Berkeley's arguments very seriously, with the result that they contributed almost nothing to the next steps in the development of the calculus. Continentals basically ignored these problems and made fundamental contributions to its development. When were these problems resolved? At the end of the 19th century, when solving them became genuinely necessary because the problems with infinitesimals had started impeding mathematical progress. So are fuzzy, equivocal concepts always bad, and is resolving them always good? Not if the history of science is the guide. Sometimes the fuzziness should be tolerated, for too much conceptual fussiness has its costs.

I cannot help but think that this has relevance for some issues that we have debated on this blog concerning the utility of formalization in current linguistics. There are some who find the concepts too fuzzy to be borne. Others think them clear enough, while conceding problems that will be cleared up when the need arises. We all know who we are. What Chomsky notes is that history on these matters doesn't always (or even usually) come down on the side of methodological hygiene. Or, really: being careful matters more at some times than at others, and one needs to show that a "confusion" is impeding progress before one insists on stopping in its tracks research that ignores it.

I also loved Chomsky's discussion of the history of the "reduction" of Chemistry to Physics. There was none. The prestige science, physics, never succeeded in explaining chemistry. Rather, physics had to change radically before it had anything to say about chemistry, whose methods remained effectively unchanged. Chomsky notes that there is every reason to think that the same is true now in domains of relevance to linguists. Think of the reduction of the mental to the neural. There is a presupposition that the soft mental sciences must adjust their findings to fit in with the hard brain sciences. The history of chemistry suggests, however, that this presupposition is tendentious. Error can come from anywhere, and when there is a problem it is never obvious which theory requires adjustment. Moreover, at least in the case of minds and brains, Chomsky notes that there are reasons for thinking that the brain people are barking up the wrong trees. He cites Gallistel & King's work suggesting that the neuro types have gotten hold of the wrong end of the stick. Readers of this blog will recognize that I could not agree more.

There are many other excellent riffs in this two-hour segment. Chomsky does an excellent job of identifying intellectual precursors and outlining where he thinks they got things right and where wrong. He arranges his discussion of different conceptions of language around Darwin, Descartes and von Humboldt, noting how each can be seen as focusing attention on I-language as the proper object of study: Darwin by understanding that language was a biological system instrumental in undergirding the distinctive nature of human (vs animal) cognition, Descartes by seeing the creativity of human language as a completely distinctive kind of natural phenomenon calling for a novel kind of scientific approach, von Humboldt by zeroing in on the recursive property of language as the thing that needed explaining. Chomsky also notes that their conceptions required some cleaning up to get to the ones that we now take as fundamental: that what is studiable is linguistic competence, not linguistic behavior; that language really is qualitatively different from what we see in other parts of the biological and physical world; and that there are some aspects of language (and cognition) that may forever stay shrouded in mystery.

Chomsky also sets the stage for his coming minimalist disquisition. He does this in two ways. First, he reviews the logic that drives the search for a few very simple principles that allow for the emergence of FL. He notes that the capacity for language is very recent and has been stable since its emergence. This implies that whatever prompted its emergence must be very simple and very few in number, hopefully a single addition. Second, he notes that the target of explanation is the recursive system that produces an infinite array of hierarchical expressions that link to systems of externalization and meaning, the latter being more basic (hour 1:14). An aside: one interesting point Chomsky makes, given recent discussions in the comments section here, is his observation that we know next to nothing about the system of thought and its objects (hour 1:15). That would appear to make the role of Bare Output Conditions marginal in what follows, but that's just a hunch, so stay tuned.



There’s lots more: comments on the role of reference as a semantic primitive (nope!), on how far we should expect to be able to understand the world and our theories of it (the first, not at all; the second, up to a point), on the importance of being puzzled, on the hollowness of most current discussion of the evolution of language, and much more. The lecture is a little like some of Dylan’s concert albums with the Band. The songs, though familiar, are all played slightly differently, so they are familiar without being boring, and part of the fun is humming along. Sort of high-class comfort food. Chomsky has a knack for making the big picture accessible in a way that nobody else can. This is a fun two hours. I hope that once he gets technical, it stays as scintillating. I suspect, however, that come lecture 2 it’ll be time to fasten one’s seatbelt and put all tray tables into an upright and locked position.

Addendum:

Chomsky makes one point that is relevant to some prior discussions on this blog. He points out that assuming that FL is a real biological object licenses using a very wide array of data to triangulate on its properties. Data from Japanese bear on the structure of English Gs, and data from acquisition bear on the structure of English Gs, and data from processing bear on the structure of English Gs, and… In other words, once one sees our theories as theories OF FL, there is (at least in principle) no way of delimiting a priori what data will count as useful in probing its structure. This, Chomsky notes, contrasts with earlier structuralist conceptions, wherein the aim was simply a way of effectively organizing a set of data or a corpus. On that view, anything beyond such data is irrelevant. The same holds if one considers the object of study instrumentally or Platonistically. Such views serve to blinker investigation a priori by dismissing empirical considerations that go beyond the particular <s,m> pairs before one's nose. In other words, understanding theories to be about FL/UG, realistically construed, widens the scope of one's inquiries.

10 comments:

  1. I watched the videos a few days ago, more or less all in one session. This means that I do not really remember exactly what was discussed when.

    I am watching this as somebody who considers himself primarily a phonologist. It is actually not very clear what C's position is with respect to phonology. It seems to me a logical conclusion from what he says that φ can hardly be considered very interesting. But when I wrote to him about this last year, he denied that this is the case, without having the time to explain more. In any case, I do not really mind. I am interested in phonological problems, and if they do not fit in the FLN, I suppose I am studying something outside FLN, although methodologically I still believe that what I am studying is NOT something we share with other primates, but something cognitive, and specifically human.

    In any case, I think Chomsky talks about his evolutionary argument in this first class: rather briefly, but it is a tune we have heard before. It goes like this: evolutionarily, it is very unlikely that language started out with 'externalization', since the first individual to get the relevant mutation would not have profited from it in the absence of others. Therefore, the relevant mutation must have been profitable already for the individual on its own: i.e. Merge, which would presumably have helped in the organization of thought. This gave the mutant an advantage which helped spread the relevant gene; externalization came only afterwards.

    Although this is a clever story, and I am willing to buy it, I see a problem with it: externalization would still have to happen, and the evolutionary puzzle would still bug us there: somebody would have had to start externalizing, but that would have been of little help to that individual. So the Merge story does not solve the original problem; it just shifts it one step away.

    All of this is related to the strange terminology which you guys (our syntactic colleagues) seem to have, and which Chomsky definitely also has: completely equating 'language' with 'syntax'. Again, it does not bother me; I am happy to be called something other than a linguist if I want to work on phonological problems. But it is rather confusing at times. Furthermore, I think there is plenty of evidence that phonological structures are not *just* about externalization; they play a role internal to the individual language user as well.

    I am curious to know what you think.

    1. Thx for asking. I think that Chomsky is right that there is something special about recursion, very special. I also think that this property is due to the syntax (I'm an interpretivist here, rather than, say, someone like Jackendoff who allows all "levels" to be generative). That said, I agree that there is something interesting and perhaps special about phonology as well. I have no basis for the following, but let me say it anyhow: I think that the kinds of feature systems we see in externalization systems could be critical to allowing us to have the very large vocabularies we do. Think of Quine's old "museum myth" problem.

      Assume that all of our concepts were in fact innate. Acquisition then amounts to nothing more than tagging them. This, it turns out, is very hard to do unless you have some way of generating, effectively, an open number of tags on the fly. So we need a pretty hefty combinatorics to solve the tagging problem even if we accept that the museum myth is true. Now, one thing that Chomsky notes in the first lecture is the second surprising fact about language: viz. that we have HUGE vocabularies. So there are two puzzling facts: huge vocab and recursion. I suspect that they are NOT related, but I don't really know. But it seems plausible to me that the kinds of feature combinations that phono studies could be important in addressing this problem. A rough sketch of the arithmetic follows.
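
      To put rough numbers on that "hefty combinatorics" point, here is a minimal back-of-the-envelope sketch in Python. The feature and inventory counts are made-up assumptions, purely for illustration, not figures from the lecture:

      ```python
      # Back-of-the-envelope arithmetic: how quickly a small feature
      # system yields enough distinct "tags" for a human-sized lexicon.
      # All numbers are illustrative assumptions, not empirical claims.

      N_FEATURES = 12              # hypothetical binary features per segment
      bundles = 2 ** N_FEATURES    # 4,096 possible feature bundles
      print(f"possible feature bundles: {bundles:,}")

      INVENTORY = 40               # a more realistic segment inventory
      for length in range(1, 6):   # tags built as strings of segments
          print(f"strings of length {length}: {INVENTORY ** length:,}")

      # Length-4 strings alone yield 2,560,000 candidate tags -- far more
      # than the ~100k words a typical speaker is said to know.
      ```

      The only point of the sketch is that exponential growth does the work: even a modest segment inventory built from a handful of binary features outruns the size of any human lexicon within a few segments.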

      BTW, Paul Pietroski has a novel argument for the evo time line that Chomsky supposes, having to do with animals that vocalize. It turns out that "fancy" vocalization is relatively rare, but the animals that display it cut across large swaths of evo time (bats, dolphins, birds, and humans do it). I have encouraged him to write something up and post it. If I succeed, we will have a second argument for the late externalization thesis.

      Last point: if something like feature combinatorics is required to gain access to a vast array of tags for concepts, then I think it is not clear whether or not SOME aspects of phono pre-date the emergence of recursion. The question is WHAT gets externalized. Perhaps it's the kind of feature combinatorics that supports concept tagging that then gets hooked up to an externalization system. So it may be that 'in the head' phono arises early, while hooking this up to the articulators is what is relatively late. After all, recursion without access to lots of items to combine may not be that useful.

      Here's a thought experiment. You are going to Hungary (I assume you are not fluent in Hungarian) and I offer you either mastery of the vocabulary but no grammar, or the grammar and 100 vocabulary items. Which will you pick? I've asked dozens of people this question, and all agree that they would take the vocab even sans G. Why? Because given 100k words you can really get along quite well. Genie demonstrated this. So recursion without atoms is not that useful, even conceptually. Question: what do we need for words of the kind that we have?

    2. Thank you. A similar idea, of phonology providing cognitive 'addresses' for vocabulary items, was suggested, it seems to me, by Jonathan Kaye in his 1989 book 'Phonology: A Cognitive View'. It is an interesting idea, worth exploring; I agree.

      And indeed, it is completely unclear why we need all those tens of thousands of words that every person seems to know. It seems quite easy to show that this peculiar property, too, runs AGAINST communicative purposes – in the sense of getting a message across. One does not need all these differences between 'donate' and 'give', one would say, in order to get one's message across.

      I guess I am neutral with respect to the question whether phonology is a separate generative system in a Jackendoff style or rather an interpretative system. In any case, I guess the externalization will always have used something which was already there; and that might have been more than just using your articulatory organs. But I think you are saying the same thing. I would be interested to see what Pietroski's argument is.

      Note, by the way, that the model you tentatively describe at the end (with 'in the head' phonology coming early and 'phonetic implementation' coming late) is very similar to the interpretation which Kiparsky's Lexical Phonology gave (gives) to the Y-model.

      Your Hungarian example might be slightly less convincing in the sense that obviously everybody would choose the vocabulary: recursion they already have, in their heads.

    3. Also about the Hungarian example: not only do the non-Hungarians have recursion in their heads, but so do the Hungarians.

    4. Marc wrote: All of this is related to the strange terminology which you guys (our syntactic colleagues) seem to have, and which Chomsky definitely also has: to completely equate 'language' with 'syntax'.

      I don't think this is strange terminology at all, and I don't think it should offend or worry anybody. What Chomsky means is that I-language is, by definition, internal (to the speaker): its representations don't pick out aspects of the mind-external world; it's all internal computation. That's just what "syntax" means, in the traditional sense; and when Chomsky says semantics and phonology are really syntax, this is what he means. So we can continue to talk about phonology, semantics, and syntax, but insofar as they're internal they're just different aspects of syntax. The only real question is whether any of these systems has a genuine semantics (in the Fregean sense), i.e. whether its symbols pick out real-world entities (physical objects, auditory signals, ...). Chomsky denies that this is true either for the symbols involved in phonology or for the symbols involved in formal semantics, which means they're technically syntax. So nothing strange here.

      Also: I am interested in phonological problems, and if they do not fit in the FLN, I suppose I am studying something outside FLN, although methodologically I still believe that what I am studying is NOT something we share with other primates, but something cognitive, and specifically human.

      As I remember the HCF paper, they explicitly say that what's distinctive is recursive combination *and* the interface mappings, which would include phonology (in the above sense). I don't think Chomsky means to suggest that phonology as a mapping component is not species-specific (although many of the systems involved in real-time production may well be shared).

    5. Perhaps a reason for the ridiculous number of words is (a) people living off the land (especially without convenient cabins to raid for supplies) need a lot of vocab for the useful kinds of animals, plants, their products, and other natural features; (b) humans stumbled upon the trick of satisfying this need with a combinatorial system based on various kinds of primes (pace Fodor and PP, whose 'innate lexicon' story seems to me to be just a pretentious way of stating the opinion that lexical semantics is inherently boring); (c) the vocab then expands to whatever extent it can, as dictated by circumstances. This extent can be large in cultural situations where there are centuries or millennia of written tradition to fossick around in, and kudos to be earned by showing that you can do it (Greeks really can drop words and even morphological forms from as far back as Homer into their compositions without changing the typeface, and sometimes do, though you might get made fun of if you do it wrong).

    6. Adding a bit more detail for explicitness, the system has phonological primes and combinatorics at one interface, and conceptual primes and combinatorics at the other.

  2. Yes, yes. Ah, thought experiments. That's why I alluded to Genie. She had very defective syntax but a pretty good lexicon. This made her able to understand and make herself understood in most contexts. In fact, it was only in "reversible" contexts, or when critical non-local dependencies were at issue, that she really had trouble. This could be taken as an indication of how far one can go without a productive G, i.e. on the strength of lexical items alone. Anyway, that's where I was pointing.

    1. She did have defective syntax, but the question is: in what sense? Again, I am channeling Chomsky here. She still presumably had the "language of thought" that Merge provides as part of the genetic endowment, right? So maybe the adult grammar that she acquired was highly defective, but she could still make use of thought and planning in combination with some intent to communicate.

      The question is really: how well off would a human minus recursion be? This has been imperfectly studied with ape sign communication, right? In one sense, these experiments (and all the evidence aside from these experiments) seem to show that apes can communicate pretty well, but in another sense, you'd hardly expect one to order a latte at Starbucks. I have this sneaking suspicion in the back of my mind that our massive vocabulary is in some way connected with FL/recursion.

    2. @William
      Sorry for the delay. Your point is reasonable. And I am no expert on Genie, having only read some of the early reports by Susan Curtiss, and those a while ago. But here is what I recall. Genie had real problems with hierarchical dependencies and locality, especially when displacement resulted in non-canonical word order. So she had problems with reversible passives, object relatives and object wh-movement, scope of negation, etc. In contrast, she had a pretty good vocabulary, and one that seemed to grow at a relatively normal rate. The syntactic deficiencies seemed impervious to training. If this is roughly accurate, then it suggests that what Genie was doing, and did very well, was finding inter-lexical dependencies that "made sense" without necessarily treating linguistic objects as structured entities. That, at least, is one reading. If one assumes, as Chomsky seems to now (though I may be over-interpreting here), that the same operation is responsible for embedding as for displacement (viz. Merge), then the absence of apparent movement dependencies suggests an absence of Merge in Genie's repertoire. This is what I was suggesting.

      However, this is all very speculative. Genie is hardly the best testbed for these kinds of concerns. But, hey, this was a thought experiment. Can we conceive of a thinking entity that can develop interesting kinds of thoughts in the absence of syntax? I believe that this is possible. I always thought that my dog had a complex and interesting mental life in the absence of Merge. And I even once heard Chomsky speculate that the animal language literature bears a passing resemblance to the kinds of things that Genie seemed able to do, though she functioned at a much higher level, given her obvious capacity to acquire vocab. So that was the idea.
