
Sunday, August 7, 2016

Pullum on Putnam

Geoff Pullum has a recent piece in The Chronicle (here) in which he praises a deservedly famous man, Hilary Putnam. Putnam was an important 20-21st century philosopher who compiled what is arguably the best collection of essays ever in analytic philosophy. Pullum notes all of this and I cannot fault him for his judgment. However, he then takes one more step: he lauds Putnam as the “world’s most brilliant, insightful, and prescient philosopher of linguistics.” That Putnam was brilliant and insightful and (maybe) prescient is not something that I would (and did not (here)) contest. That any of this extended to his discussions of linguistic topics strikes me as either a sad commentary on the state of the philosophy of linguistics (this gets my vote) or hyperbole (it was a belated obit after all). At any rate, I want to make clear why I think that Putnam’s writings on these matters are best treated as object lessons rather than insights. Happily, this coincides with my re-reading of Language and Mind. Chomsky takes on some of Putnam’s more (sadly) influential criticisms of GG and (I am sure you will not be surprised to hear from me) demolishes them. The gist of Chomsky’s reply is that there is very little there there. He is right. This has not stopped analogous criticisms from repeatedly being advanced, but they have not gotten more convincing by repetition. Let me elaborate.

Putnam’s most direct critiques of the Chomsky program in GG were his 1967 Synthese paper (“The ‘Innateness Hypothesis’ and Explanatory Models in Linguistics”) and a later companion piece, “What is innate and why.” Chomsky considers Putnam’s arguments in detail in chapter 6 of the expanded edition of Language and Mind, entitled “Linguistics and Philosophy.” Here is the play-by-play.

Chomsky’s critique has three parts:

1. Putnam’s specific critiques “enormously underestimate and misdescribe, the richness of structure, the particular and detailed properties of grammatical form and organization that must be accounted for by a “language acquisition model,” that are acquired by the normal speaker-hearer and that appear to be uniform among speakers and across languages” (179-180).
2. Putnam’s computational claims concerning grammatical simplicity are unfounded (181-2).
3. There is no argument for Putnam’s claim that “general multipurpose learning strategies” are sufficient to account for G acquisition and there is no current reason to think that any such exist when one looks at the grammatical details (184-5).

These are all closely related points, and Pullum is correct in suggesting that these points have repeatedly reappeared in critiques of GG. Thus, it is still true that simplistic views of what is required for G acquisition rely on underestimating and misdescribing what must be explained. It is still true that claims made on behalf of general learning strategies eschew the hard work of showing how the many “laws” linguists have discovered over the last 60 years are to be acquired without quite a bit of what looks like language specific software. Pullum is right: the critics have repeatedly picked up Putnam’s objections even after these have been shown to be inadequate and/or beside the point. Putnam has indeed been influential, and we are the worse for it.

Let me lightly elaborate on these three points.

Critics regularly avoid the hard problems. For example, look at virtually any takedown in the computer science literature of, for example, structure dependence, and you will observe this (see here and here for two recent reiterations of this complaint). All of these miss the point of the argument for structure dependence by concentrating on easily understandable illustrative toy examples intended for the general public (Reali and Christiansen) or misconstruing what the term actually denotes (Perfors et al.).

I have said this before and I will do so again: GGers have discovered many non-trivial mid level generalizations that are the detailed fodder that fuels Poverty of Stimulus (PoS) arguments that implicate linguistically specific structure for FL. There can be no refutation of these arguments if the generalizations are not addressed. So, Island Effects, ECP effects, Binding effects, Cross Over effects etc. constitute the “hard problems” for non-domain specific learning architectures. If you think that a general learner is the right way to go you need to account for these sorts of data. And there is, by now, a lot of this (see here for a partial list). However, advocates of “simpler” less domain specific accounts (almost) never broach these details, though absent this the counter proposals are at best insufficient and at worst idle.

It seems that Putnam is the first in a continuing line of critics that have decided that one can ignore the linguistic details when arguing against undesired cognitive conclusions. As Chomsky notes contra Putnam, there is more to phonology than a “short list of phonemes” from which languages can choose (e.g. there is also cyclic rule application) and there is more to syntax than proper names (e.g. there are also Island effects). Putnam failed to engage with the details (as discussed in Chomsky’s work at the time) and in doing so established a tradition that many have followed. It is not, however, a tradition that anyone should be proud to be part of, whatever its pedigree.

Putnam advanced another argument that is sadly still alive today. He argued that invoking innateness doesn’t solve the acquisition problem, but only “postpones” it. What’s this mean? The argument seems to be that stuffing FL with innate structure is explanatorily sterile as it simply pushes the problem back one step: how did the innate structure get there?[1] Frankly, I find this claim philosophically embarrassing. Why?

One of the main professional requirements of a card-carrying philosopher is that her/his work clarify what point an argument is aiming to make; what question is it trying to answer? Assuming an FL that is structured with domain specific linguistic structure addresses the question of how an LAD can acquire its language specific G despite the poverty of the relevant PLD (here’s Chomsky 184-5: “Invoking an innate representation of universal grammar does solve the problem of learning (at least partially), in this case.”) If such a UG structured FL suffices to solve the PoS problem, it raises a second question: how did the relevant (domain specific) mental structure get there (i.e. why is FL structured with language proprietary UGish principles)? Note, these are two different questions (viz. “what FL is required to project a G_L from PLD_L?” is different from “how did the FL we in fact have get embedded in our mental architecture in the first place?”). Consequently, failing to answer the evolutionary questions concerning the etiology of rich innate mental structure does not imply a failure to answer/address the question of how an individual LAD acquires its G_L.

Of course, it is not irrelevant either, or might not be. If we could show that a given domain specific FL could not possibly have evolved then we could be pretty sure that the postulated innate mental mechanism in the individual cannot be a causal factor in G acquisition. After all, if such an FL cannot be there then it isn’t there, and if it isn’t there then it cannot help with the acquisition problem. But, and this is very important, nobody has even the inklings of an argument that even a very rich domain specific FL could not have arisen in humans. Right now, this impossibility claim is at best a hunch (viz. an ungrounded prejudice). Why? Because we currently have very few ideas about how any cognitive structures evolve (as Lewontin has famously noted). Indeed, even the evolution of seemingly simple non-cognitive structures remains mysterious (see here for a recent example). So, any confident claim that even a richly domain specific FL is evolutionarily impossible is not on the cards right now, and is thus a weak counter-argument against an FL that can solve the acquisition problem.[2]

A sidebar: none of this is meant to imply that this evolutionary question is uninteresting. I am an unrepentant Minimalist and take seriously the minimalist problematic: how could an FL such as ours have arisen in the species? As such I am all in favor of purging FL of as much UG as possible and trading this for general cognitive mechanisms. However, because I consider this an interesting problem I resist fiat solutions; you know, bold yet vacuous declarations that a general learner can do it all, without any detailed engagement with specific claims, resting instead on bland assurances that it is in principle possible. I like the question so much that I want to see details; actual explanations engaging with specific proposed UG structures. I love reduction, I just don’t like the cheap variety. So derive your favorite UG based accounts from more general principles and watch me snap to attention.

BTW, Chomsky makes just this point as early as 1972. Here is a quote from his discussion of Putnam (182):

I would, naturally, assume that there is some more general basis in human mental structure for the fact (if it is a fact) that languages have transformational grammars; one of the primary scientific reasons for studying language is that this study may provide some insight into general properties of mind. Given those specific properties, we may then be able to show that transformational grammars are “natural.” This would constitute real progress, since it would enable us to raise the problem of innate conditions on acquisition of knowledge and belief in a more general framework. But it must be emphasized that, contrary to what Putnam asserts, there is no basis for assuming that “reasonable computing systems” will naturally be organized in the specific manner suggested by transformational grammar.

One might argue that Chomsky’s version of minimalism is his way of making good on Putnam’s computational conjecture, though I doubt that Putnam would see it that way. At any rate, Minimalism starts from the recognition that domain specific FLs can solve standard linguistic acquisition problems (i.e. PoS problems) and then tries to reduce the linguistic specificity of the various principles. It does not solve the domain specificity problem by ignoring the relevant domain specific principles.

One more point and I end. In his reply to Putnam, Chomsky outlines a very reasonable strategy for eliminating domain specificity in favor of something like general learning.[3] In his words (184):

A non dogmatic approach to this problem [i.e. the acquisition of language NH] can be pursued, through the investigation of specific areas of human competence, such as language, followed by the attempt to devise a hypothesis that will account for the development of such competence. If we discover that the same “learning strategies” are involved in a variety of cases, and that these suffice to account for the acquired competence, then we will have good reason to believe Putnam’s empirical hypothesis is correct. If, on the other hand, we discover that different innate systems…have to be postulated, then we will have good reason to believe that an adequate theory of mind will incorporate separate “faculties,” each with unique or partially unique properties.

See here for another discussion elaborating these themes.

To sum up: The problem with Putnam’s philosophical discussions of linguistics is that they entirely missed the mark. They were based on very little detailed knowledge of the GG of the time. They confused several questions that needed to be kept separate and they philosophically begged questions that were (and still are) effectively empirical. The legacy has been a trail of really bad arguments that seem to arise, zombie-like, despite their inadequacy. Putnam wrote many interesting papers. Unfortunately, his papers on linguistics are not among these. Let these rest in peace.[4]



[1] There are actually two points being run together here. The first is that any innate structure, whether it is domain specific or not, begs the explanatory question. The second is that only a domain specific “rich” FL does so. The form of the argument Putnam presents applies to either, for both call for an evolutionary account of how the mental capacities arose. Humans might, after all, have a richer general cognitive apparatus than our ape cousins, and how it arose would demand explanation even if it were not domain specific. However, the thinking usually is that only domain specific richness is problematic. In what follows I abstract from this ambiguity.
[2] Gallistel has noted that cognitive domain specificity is biologically quite reasonable (see here for discussion and links).
[3] See here for another discussion along the same lines channeling Reflections on Language.
[4] Perhaps it is no surprise that Dan Everett loved this Pullum post. In his words: “Glad you noticed this! He was indeed one of the best of the last 100 years.” This comment does not indicate what Everett found so wonderful, but given the topic of Pullum’s post and Everett’s own contributions to the philosophical confusions, it is reasonable to assume that he found the Putnam critiques against domain specific nativism compelling. But you knew he would, right?

Sunday, September 6, 2015

Two shortish perusables

Here are a couple of things that crossed my desk this week that you might find interesting.

The first (here) is from Christoph and lifted from the comments section of this. The piece is a short comment by Hilary Putnam (who, btw, was the head of my thesis committee) on the “innateness hypothesis.” He reiterates why he rejects the hypothesis. It comes down to the mis-assertion that Chomsky’s version of FL/UG requires that it “provide for all possible meanings,” which Putnam takes to be ridiculous because Chomsky specifies no “mechanism [that] could have endowed the brains of primitive men and women with such ‘particular meanings’ as ‘quantum potential’ and ‘macroeconomic’ or with terms by means of which they could be defined, if, indeed, there are more elementary terms in which this could be done.” The absence of a proposed mechanism makes it clear to Putnam that the ‘innateness hypothesis’ must be wrong. That’s the argument. Unfortunately, it’s really very bad.

Philosophers love this kind of argument from incredulity. Putnam’s main current contribution to the “debate” is to point to concepts like macroeconomic rather than carburetor as the ones that are particularly problematic. But the claim has the familiar surely-you-don’t-mean-to-say form so beloved of philosophers. The unfortunate thing is that Putnam, I believe, has simply misunderstood both Chomsky’s and, more relevantly, Fodor’s positions on these matters. Let me explain.

First a small terminological point. As Chomsky has often remarked, it is unclear what the ‘innateness hypothesis’ is supposed to be. It cannot be that anyone doubts that minds come equipped with innate structure. Everyone assumes that the mind/brain has given structures and operations that guide/bias learning/acquisition. Truly blank slates stay blank. The question has never been whether minds/brains have innate structure but what is innate, what kinds of generalizations are minds/brains predisposed to make so that when confronted with input they generalize beyond it? Everyone thinks that there is something. The question is what. Chomsky’s simple point is and always has been that there is every reason to think that in the domain of language, the mind/brain has methods of generalization specific to linguistic forms and that any kind of simple associationism built around mere sensory input has not, will not and cannot work. If there is an innateness hypothesis worth discussing, it is the specific suggestion that language competence relies on language specific mental capacities and cannot be reduced entirely to other cognitive capacities. And this requires discussing details, something that your average friendly famous philosopher of language has rarely (never?) done.  

Second, if this is what Chomsky intended, then it’s clear that Putnam’s observations don’t bear on it. Specifically, so far as I know, Chomsky has had very little to say about where concepts or lexical meanings come from. In fact, so far as I know, nobody (including Putnam) has any idea of how concepts arise in minds. Chomsky has repeatedly said as much, pointing to the human capacity for lexical acquisition as being a mystery. So, whatever Putnam is saying here, it bears little on Chomsky’s views, which have largely been confined to claims about syntactic structure (catchy phrase, huh? Maybe a good book title?). Indeed, since the very earliest days of GG, Chomsky has been very very very circumspect in his discussion of lexical meaning and how much we understand about it. My reading of his discussions of these matters is that Chomsky’s main conclusion is a negative one; that whatever lexical meaning is, it’s not just a matter of referential dependency (see here for some discussion).

But, third, maybe the target of Putnam’s comments is not Chomsky but Fodor. Fodor has made an argument that bears a family resemblance to the one that Putnam summarily dismisses. But, even as regards Fodor, I think that Putnam mistakes the claims. Very briefly, what Fodor argued is that the only theories of induction we have are “selection” theories. What I mean is that they understand learning as the inductive fixation of belief given a space of possible hypotheses. Induction works by moving one around this space of possibilities and, if there is enough data of the right kind, settling in one part of the space or another (see here for discussion). Thus, what we have are theories of belief fixation given a space of possible concepts/beliefs. Given that this is what we have, there is a sense in which you cannot possibly acquire anything that you don’t already have the wherewithal to conceptually represent. That was Fodor’s first point.
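The “selection” picture of induction can be made concrete with a toy Bayesian learner. This is my own illustrative sketch, not anything from Fodor or Putnam; the hypotheses and numbers are invented for the example. The point it illustrates is the structural one: the learner can only ever settle on a hypothesis it already represents.

```python
# Toy "selection" theory of induction: learning as Bayesian belief
# fixation over a FIXED, pre-given space of hypotheses.
# All names and numbers are illustrative only.

def update(prior, likelihoods, datum):
    """One step of Bayesian updating over the hypothesis space."""
    posterior = {h: p * likelihoods[h](datum) for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two hypotheses about a coin; data are 'H'/'T' flips.
likelihoods = {
    "fair":   lambda d: 0.5,
    "biased": lambda d: 0.9 if d == "H" else 0.1,
}
beliefs = {"fair": 0.5, "biased": 0.5}

for d in ["H", "H", "H", "T", "H", "H"]:
    beliefs = update(beliefs, likelihoods, d)

# The data move the learner around the space and fix one belief...
best = max(beliefs, key=beliefs.get)
# ...but a hypothesis absent from the space (unrepresented, or with
# zero prior mass) can never be acquired, no matter what the data are.
# Induction here selects among options already represented.
```

Nothing in the update rule can introduce a third hypothesis; that is the sense in which, on such theories, whatever gets fixed must already be representable by the learner.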

He coupled it with a second series of arguments that denied that most lexical meanings were decompositional (something that the Putnam quote above seems to agree with). So, if most word meanings cannot decompose and fixation requires representation of the concepts that are fixed then there is a sense in which the meaning of ‘carburetor’ is there in the mind from the get-go.  This is Fodor’s argument.

Now Putnam clearly dislikes the conclusion. Say he is right, that it is a reductio whose premise must be false. What does that tell us? Well, it implies that there must be some other theory of learning besides the inductive ones that we all know and love. Recall that Fodor’s argument is that inductive learning theories imply all the fixable concepts are (in one sense) innate. So if you don’t like this conclusion you must show either that inductive learning theories do not presuppose hypothesis spaces (or analogues thereof), contrary to what Fodor noted, or that there are other theories of learning that are non-inductive that explain how we acquire words and meanings. In other words, if you don’t like the conclusion then you either need to show where Fodor’s description of induction fails or suggest that induction is not the only way to learn and provide an outline of the other kinds. Putnam does neither.

Curiously, I think that Fodor might agree with the second option. In the Modularity of Mind, if I recall correctly, Fodor suggested that only modular cognitive systems are amenable to current investigation. One way of reading this is that only informationally restricted modular domains are ones where the hypothesis space/inductive procedure story can be made to work. Moreover, Fodor is on record opposing the conception of the mind as massively modular (i.e. made up of endless numbers of small modules) and thinks that something else (he knows not what) is going on in central system cognition. It is consistent with Fodor’s views that lexical acquisition is not inductive and so there is some other way that concepts are acquired. But, and this is key, he does not have the remotest inkling as to what this other process might be and, so far as I can tell, neither does Putnam.

Putnam also throws in some comments on Chomsky and Fodor’s views about evolution but it is entirely unclear what any of the (misnamed) innateness controversy has to do with evolution. As regards lexical acquisition, for example, Fodor is not against our conceptual repertoire changing over time, he is against it changing by induction/learning over time.

Sorry to have gone on so long. Putnam’s remarks are not new, as he himself points out. It seems, however, that his current views miss the mark as much today as they did when he first advanced them. The more things change…

Here is a second paper on referentially ready minds. The paper is in Nature and two of the authors should be well known to linguists. It makes, to my mind, the modest point that kids are ready to take language as an indicator of referential intent when accompanied by other behavior (e.g. eye gaze). It seems that even very young kids show indications of thinking that language use goes hand in hand with referential intent.


One question I had is what I am supposed to take away from this? What does it tell us about acquisition? Is the suggestion that establishing referential value is an important factor in language acquisition? If so, how big a factor? Bigger than distributional analysis? Is being reference ready a critical pre-condition for language acquisition? Is the supposition that reference is what meaning consists in? (If so, see Chomsky’s relevant remarks on this topic linked to above.) I am not sure. So, let me ask you: what’s the take home message here and why is what the paper argued for important? BTW, this is a sincere question: what’s the overall take home message? That language can be used referentially and that kids come natively equipped to believe this? Or is there something more going on here?