Sunday, September 6, 2015

Two shortish perusables

Here are a couple of things that crossed my desk this week that you might find interesting.

The first (here) is from Christoph and lifted from the comments section of this. The piece is a short comment by Hilary Putnam (who, btw, was the head of my thesis committee) on the “innateness hypothesis.” He reiterates why he rejects the hypothesis. It comes down to the mistaken claim that Chomsky’s version of FL/UG requires that it “provide for all possible meanings,” which Putnam takes to be ridiculous because Chomsky specifies no “mechanism [that] could have endowed the brains of primitive men and women with such ‘particular meanings’ as “quantum potential” and “macroeconomic” or with terms by means of which they could be defined, if indeed, there are more elementary terms in which this could be done.” The absence of a proposed mechanism makes it clear to Putnam that the ‘innateness hypothesis’ must be wrong. That’s the argument. Unfortunately, it’s a very bad one.

Philosophers love this kind of argument from incredulity. Putnam’s main current contribution to the “debate” is to point to concepts like macroeconomic rather than carburetor as the ones that are particularly problematic. But the claim has the familiar surely-you-don’t-mean-to-say form so beloved of philosophers. The unfortunate thing is that Putnam, I believe, has simply misunderstood both Chomsky’s and, more relevantly, Fodor’s positions on these matters. Let me explain.

First, a small terminological point. As Chomsky has often remarked, it is unclear what the ‘innateness hypothesis’ is supposed to be. It cannot be that anyone doubts that minds come equipped with innate structure. Everyone assumes that the mind/brain has given structures and operations that guide/bias learning/acquisition. Truly blank slates stay blank. The question has never been whether minds/brains have innate structure but what is innate: what kinds of generalizations are minds/brains predisposed to make so that, when confronted with input, they generalize beyond it? Everyone thinks that there is something. The question is what. Chomsky’s simple point is and always has been that there is every reason to think that in the domain of language, the mind/brain has methods of generalization specific to linguistic forms, and that any kind of simple associationism built around mere sensory input has not, will not, and cannot work. If there is an innateness hypothesis worth discussing, it is the specific suggestion that language competence relies on language-specific mental capacities and cannot be reduced entirely to other cognitive capacities. And this requires discussing details, something that your average friendly famous philosopher of language has rarely (never?) done.

Second, if this is what Chomsky intended, then it’s clear that Putnam’s observations don’t bear on it. Specifically, so far as I know, Chomsky has had very little to say about where concepts or lexical meanings come from. In fact, so far as I know, nobody (including Putnam) has any idea of how concepts arise in minds. Chomsky has repeatedly said as much, pointing to the human capacity for lexical acquisition as being a mystery. So, whatever Putnam is saying here, it bears little on Chomsky’s views, which have largely been confined to claims about syntactic structure (catchy phrase, huh? Maybe a good book title?). Indeed, since the very earliest days of GG, Chomsky has been very, very, very circumspect in his discussion of lexical meaning and how much we understand about it. My reading of his discussions of these matters is that Chomsky’s main conclusion is a negative one: whatever lexical meaning is, it’s not just a matter of referential dependency (see here for some discussion).

But, third, maybe the target of Putnam’s comments is not Chomsky but Fodor. Fodor has made an argument that bears a family resemblance to the one that Putnam summarily dismisses. But even as regards Fodor, I think that Putnam mistakes the claims. Very briefly, what Fodor argued is that the only theories of induction we have are “selection” theories. What I mean is that they understand learning as the inductive fixation of belief given a space of possible hypotheses. Induction works by moving one around this space of possibilities and, if there is enough data of the right kind, settling in one part of the space or another (see here for discussion). Thus, what we have are theories of belief fixation given a space of possible concepts/beliefs. Given that this is what we have, there is a sense in which you cannot possibly acquire anything that you don’t already have the wherewithal to conceptually represent. That was Fodor’s first point.

He coupled it with a second series of arguments that denied that most lexical meanings were decompositional (something that the Putnam quote above seems to agree with). So, if most word meanings cannot decompose and fixation requires representation of the concepts that are fixed then there is a sense in which the meaning of ‘carburetor’ is there in the mind from the get-go.  This is Fodor’s argument.

Now Putnam clearly dislikes the conclusion. Say he is right: the argument is a reductio whose premise must be false. What does that tell us? Well, it implies that there must be some other theory of learning besides the inductive ones that we all know and love. Recall that Fodor’s argument is that inductive learning theories imply that all the fixable concepts are (in one sense) innate. So if you don’t like this conclusion, you must show either that inductive learning theories do not presuppose hypothesis spaces (or analogues thereof), contrary to what Fodor noted, or that there are other, non-inductive theories of learning that explain how we acquire words and meanings. In other words, if you don’t like the conclusion, then you need either to show where Fodor’s description of induction fails or to argue that induction is not the only way to learn and provide an outline of the other kinds. Putnam does neither.

Curiously, I think that Fodor might agree with the second option. In The Modularity of Mind, if I recall correctly, Fodor suggested that only modular cognitive systems are amenable to current investigation. One way of reading this is that only informationally restricted modular domains are ones where the hypothesis space/inductive procedure story can be made to work. Moreover, Fodor is on record opposing the conception of the mind as massively modular (i.e. made up of endless numbers of small modules) and thinks that something else (he knows not what) is going on in central system cognition. It is consistent with Fodor’s views that lexical acquisition is not inductive and so there is some other way that concepts are acquired. But, and this is key, he does not have the remotest inkling as to what this other process might be and, so far as I can tell, neither does Putnam.

Putnam also throws in some comments on Chomsky and Fodor’s views about evolution but it is entirely unclear what any of the (misnamed) innateness controversy has to do with evolution. As regards lexical acquisition, for example, Fodor is not against our conceptual repertoire changing over time, he is against it changing by induction/learning over time.

Sorry to have gone on so long. Putnam’s remarks are not new, as he himself points out. It seems, however, that his current views miss the mark as much today as they did when he first advanced them. The more things change…

Here is a second paper on referentially ready minds. The paper is in Nature and two of the authors should be well known to linguists. It makes, to my mind, the modest point that kids are ready to take language as an indicator of referential intent when accompanied by other behavior (e.g. eye gaze). It seems that even very young kids show indications of thinking that language use goes hand in hand with referential intent.

One question I had is what I am supposed to take away from this. What does it tell us about acquisition? Is the suggestion that establishing referential value is an important factor in language acquisition? If so, how big a factor? Bigger than distributional analysis? Is being reference ready a critical pre-condition for language acquisition? Is the supposition that reference is what meaning consists in? (If so, see Chomsky’s relevant remarks on this topic linked to above.) I am not sure. So, let me ask you, and this is a sincere question: what’s the take home message here, and why is what the paper argued for important? That language can be used referentially and that kids come natively equipped to believe this? Or is there something more going on here?


  1. Thanks for the link to the Nature paper, which I will make use of in my Language Acquisition class.

    My reading of the paper is somewhat different. I think the (external) referential part of the picture is secondary. The main message, if I understand it correctly, is that social cues alone are not sufficient for language acquisition, and it must be coupled with linguistic input---hence the forward vs. backward speech comparison, which is especially pertinent because the Mehler group has done the defining work on the identification of speech via prosodic tracking, precisely by contrasting forward vs. backward speech.

    There has been an overarching theme in recent child language literature to the effect that if the baby got the social/communicative part right, language acquisition would follow. (It's not difficult to come up with counterexamples for specific studies and overblown conclusions, although these types of cues are presumably useful.) This paper shows that that's not enough, and language/speech is special after all, even if we look at just a subcomponent of word learning, i.e., finding potential referents in the external world.

    1. Thx. A question: the results seemed to track latency rather than capacity. So one got the referentiality in all cases, just faster with the forward language cue. This suggests that language adds something, but not that it enables what is otherwise absent. Does this matter?

  2. Just to note that the second paper appeared not in Nature but in Scientific Reports, the new open access author-pays mega-journal modelled on PLOS ONE that is owned by Nature publishing group. It's easy to be thrown off by the url.