There has been a bit of a kerfuffle in the thread to What’s Chomsky Thinking Now concerning
Fodor’s claim that all of our concepts are innate. Unfortunately, with the exception of Alex
Drummond, most who have participated in the discussion appear unacquainted with
Fodor’s argument. It has two parts, both
of which are interesting. To help focus the indignation of his critics, I will
outline them below as a public service.
Before starting, however, let me share with you my own rule of
thumb in these matters, one that I learned from Kuhn’s discussions of
Aristotelian physics: when someone very smart says something that you think is
obviously dumb, go back and reread it, for it may be that you have
thoroughly misunderstood the point. I am acquainted with Jerry Fodor. Let me
assure you he is very smart. So let’s start.
As noted, Fodor's argument has two prongs. The first part (an excellent, very short form
of which can be found here, 143ff) is an observation about learning as a form of
inductive logic. Fodor distinguishes between theories of concept acquisition
and theories of belief fixation. The latter is what theories of learning are
about. Learning theories have nothing general to say about concept acquisition
because, being inductive theories, they presuppose
the availability of a set of basic concepts without which the inductive
learning mechanism cannot get off the ground.
If this all sounds implausible, considering Fodor’s example will make
his intent clear.
Consider someone learning a new word miv in a classical learning context. The subject is shown cards, some of which are
miv and some non-miv. The subject is given a cookie whenever s/he correctly
identifies the miv cards and is hit
by a bolt of lightning when s/he fails (we want the reinforcement here to be
very unambiguous). What does the subject
do? According to any classical learning theory, s/he considers a hypothesis of
the form “X is miv iff X is…”, the
blank being filled in with a specification of the features that are criterial for
being a miv. The data is then used to assess the truth of
the hypotheses with various values of “…”.
So if miv means “red and
round” then the data will tend to confirm “X is miv
iff X is red and round” and disconfirm everything else. This much Fodor takes
to be obvious. If learning is a form
of inductive inference (and, as he notes, there is no other theory of
learning), then it takes the indicated form.
Fodor then asks: where do the hypotheses that are tested come
from? In other words, where do the fillers of "…" come from? They are GIVEN. Inductive theories presuppose
that the set of alternatives that the data filter are provided up front. Given
a hypothesis space, the data (environmental input) can be used to assign a
number (a probability) indicating how well each hypothesis fits the data. What inductive theories don't do is provide
the hypothesis space. Another way of
making the same point is that what inductive logics (i.e. learning theories) do
is explain how, given some input, the user of that logic should/does navigate the
hypothesis space: where’s the best place to be given that the data has been
such and so. However, if this is what
inductive logics do (and, I cannot repeat this enough, all learning theories
are species of inductive logics), then the field of concepts used by the
inductive logic cannot themselves be fixed by the inductive logic. Or as Fodor puts it (147):
You have to be nativistic about the
conceptual resources of the organism because the inductive theory of learning
simply doesn’t tell you anything about that – it presupposes it – and the
inductive theory of learning is the only one we’ve got.
So, Fodor’s argument amounts to pointing out what everybody
should be nodding in agreement with: no induction without a hypothesis space. If the inductive theory is a theory of
learning, then the hypothesis space must be innate and that means that all the
concepts used to define it must be innate as well. As I said, this part of the argument is
apodictic, cannot be gainsaid and, in fact, never has been. Even Quine, a
rather extreme associationist, agreed that everyone
is a nativist to some degree for without some nativism (enough to define the
hypothesis space) there can be no induction and hence no learning. Fodor’s
point is to emphasize this and to use it against theories that suggest that
one can bootstrap one's way from less conceptually complex systems of
“knowledge” to more complex ones. If
this means that one can expand one's hypothesis space by learning, and
'learning' means induction, then this is impossible.[1]
None of this should be surprising or controversial.
Controversy arises with respect to Fodor’s second prong of the argument. He
takes the concepts words tag to be effectively atomic. Another way of making
this point in the domain of language is that there is no lexical decomposition,
or at least very very little. Why is this assumption so important? Because the
relation between the input and the atomic features of the hypothesis space is
causal, not inductive. You see a red thing and +red lights up. Pure
transduction. Induction proceeds given
this first step: count how many of the lit features are red+round vs red+not-round,
vs green+round etc. So, for atomic
features/concepts the relation between their “lighting up” and the environment
is not one of learning (one doesn’t learn to light them up) it’s just a brute
fact (they light up). So, and this is an
important point, to the degree that most of our words denote atomic concepts
(i.e. to the degree that there is no lexical decomposition) to that degree
there is no interesting inductive theory
of concept acquisition. Note, this does not
preclude there being a possibly interesting causal theory, e.g. maybe being
exposed to a miv is causally responsible for triggering the concept miv or maybe being exposed to a dax is
causally responsible, or maybe being exposed to a miv right after birth is or
while being snuggled by your mother etc. The causal triggers might conceivably
be very complex and finding them may be very difficult. However, with respect
to atomic features, one can only discover brute causal connections, not
inductive ones. Fodor’s point is that we should not confuse them as they are
very different. Recently Fodor has speculated that prototypes are causally implicated
in triggering concepts, but he insists, rightly given his strong
atomicity, that this relation is not inductive (See here).
To recap, the logic of the first argument is that primitive
concepts cannot be “learned” as they are presupposed for learning to take
place. This allows the possibility that one “learns” to combine these primitives
in various ways and that’s what concept acquisition is. Concept acquisition is just learning to form complex
concepts. Fodor is conceptually happy with this possibility. It is logically
possible that concept “acquisition” amounts to defining new concepts in terms
of the primitive ones. As applied to words (which I am assuming denote
concepts), it is logically possible that most/many words are complex
definitions. Logically possible? Yes. Actually the case? No, or that’s what
Fodor has been arguing for a very long time.
His arguments are almost always of the same form: someone
proposes some complex definition for a term and he shows that it doesn’t work. Indeed, very few linguists, psychologists or
philosophers have managed to provide more than a handful of purported definitions. 'Bachelor' may mean
unmarried man, but as Putnam noted a long time ago, there are not many words
like it.
Fodor is actually in a good position to understand this
point, for he, along with Katz, once investigated reducing meanings to feature
trees. David Lewis derided this "markerese" approach to semantics (another
instance of 'be careful what you hurl, as it may boomerang back at you'; see Paul
on Harman on Lewis here), but what really killed it was the realization that
virtually all words bottomed out in terminal features referring to the very
concept that the featural semantics was intended to explicate. So, e.g. the
markerese representation for ‘cat’ ended up having a terminal CAT. This clearly
did not move explanation forward, as Fodor realized.
So is Fodor right about definitions? Well, I am slightly less skeptical than he is
about the virtues of decomposition; this said, however, I cannot find good
examples showing him to be wrong. As the first part of his argument is
unassailable, then those that don’t like the conclusion that ‘carburetor’ is
innate (i.e. a primitive of our conceptual hypothesis space) had better start
looking for ways of defining these words in terms of the available
primitives. If past history is any guide,
they will fail. Definitions in terms of
sense data have come and (happily) gone and cluster concepts, once considered
seriously, have long been abandoned. There is a little industry in linguistics
working on argument structure in the Hale-Keyser (HK) framework, but, at least
from where I sit, Fodor has drawn significant blood in his debates with HK
aficionados. Suffice it for now to repeat that this is where the action must be
if Fodor is to be proven incorrect, and the ball is clearly not in his court. It is easy
to say how to show that he is wrong, viz. show that most/many words denote complex
concepts. Saying how to show Fodor is wrong is
easy; showing that he is wrong has proven far more challenging.[2]
So that’s the argument. The first step is clearly correct.
All the action concerns the second. One
further point: there has been a lot of discussion in the thread suggesting that Fodor is
advocating a nutty kind of nativism that eschews learning from the environment.
As should be clear, this is simply false. If word learning is belief fixation
then it can be as inductivist as you like. However, if word learning is concept
acquisition then the question entirely revolves around the nature of the
primitives, which everyone
must take as innate and hence not acquired. Fodor's bottom line is that
hypothesis spaces are not acquired but presupposed, and that as a matter of fact
there is far less definition than one might have supposed. That's the argument; fire
away!
[1]
Alex Clark mentioned Sue Carey’s recent book that appeared to consider this
bootstrapping possibility. Gallistel reviewed her book, making effectively this
point: that induction/learning cannot expand a hypothesis space (here).
To repeat, all that such theories show is how to most effectively navigate this
space given certain data.
[2]
One interesting avenue that Paul has been exploring revolves around Frege’s
notion of definition. For Frege,
definition changed a person’s cognitive powers. This is really interesting.
Paul’s work starts from Jeff Horty’s discussion of Frege’s notion (here) and considers how to extend it to theories of meaning more generally (cf. here
and here).
Although I am also impressed by Jerry's formidable intellect, his massive innateness hypothesis is generally considered ridiculous (a qualification used by Susan Carey and many others). The argument is circular because it assumes what it is supposed to establish: that the hypothesis space presupposed by inductive learning is innate. The false, somewhat hidden, premise (but explicitly stated by you yourself) is that concepts are the denotata of words. However, concepts are not denotata of words but of INTERPRETATIONS (interpreted words if we limit ourselves to words). Interpretations of words crucially involve both the information associated with the word itself and with the context. So, to the extent that hypothesis spaces consist of concepts, they are co-determined by context. Concepts, in other words, are not immutable, because of their partial contextual nature. This is in tune with our everyday experience that hypothesis spaces change over time, both for the individual and history at large.
It doesn't presuppose what needs to be established. It simply notes that if learning is inductive then there needs to be a space of options given. There is no inductive theory that does not begin with a specification of the hypothesis space. You cannot count what you don't have units for. Susan does not like this conclusion, but frankly I have never understood how bootstrapping was supposed to allow you to enlarge your hypothesis space. Induction cannot do this. Period. If the right target is not in the space of options there is nothing you can do inductively to put it there. This is not a deep point, but its consequences coupled with atomism can be significant. Treat this like Zeno's paradoxes if you don't like the conclusion: find a way around the conclusions, don't pretend the argument is not good.
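The "nothing you can do inductively to put it there" point can be seen in miniature with a toy Bayesian update over a fixed hypothesis space (an invented coin-bias example, offered only as an illustration): the data reweight the given options, but a hypothesis that starts with zero mass, i.e. one effectively outside the space, stays at zero no matter what the data are.

```python
def bayes_update(prior, likelihood, datum):
    """One step of Bayesian updating over a FIXED set of hypotheses.

    The data can reweight the options, but the set of keys -- the
    hypothesis space -- is the same before and after.
    """
    posterior = {h: p * likelihood(h, datum) for h, p in prior.items()}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

def likelihood(bias, flip):
    """Probability of a coin flip given a hypothesized bias toward heads."""
    return bias if flip == "H" else 1 - bias

# Coin-bias hypotheses; 0.9 carries zero prior mass, i.e. it is
# effectively outside the space the learner entertains.
posterior = {0.1: 0.5, 0.5: 0.5, 0.9: 0.0}

for flip in "H" * 20:  # data that strongly favor the excluded 0.9
    posterior = bayes_update(posterior, likelihood, flip)

# No amount of data moves mass onto a hypothesis that started with none:
# posterior[0.9] is still exactly 0.0, and 0.5 wins by default.
```

The learner here does the best it can within its space (it settles on 0.5), but the true hypothesis is simply unreachable, which is the selection-theory point in arithmetic form.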
DeleteAs for Fodor's argument in its simple form, we do not disagree. Inductive learning presupposes a hypothesis space, fine. What we disagree about is the nature of the hypothesis space. Implicitly, we only consider two options: 1) a weakly constrained, open hypothesis space (Quine, Skinner), 2) a heavily constrained, closed hypothesis space (Fodor, Chomsky). Insofar as we all reject 1) (and I certainly do), we might conclude that 2) is the only game in town. Wrongly so! There is an obvious third option: 3) a heavily constrained, open hypothesis space. What opens up the (heavily constrained) hypothesis space? Its contextuality, in my view. Think of the logic of *application*: even with one operating system you can have an unlimited set of apps compatible with the operating system. In other words, even with a finite (possibly small) number of basic brain structures, the set of their applications is not so limited. Actual hypothesis spaces do not only involve the (possibly small) set of innate structures (whatever that means), but also a record of their applications in ever changing contexts.
The other problem is: is it really clear that definitions/lexical decompositions fail any worse than generative grammars do, especially relative to the size of the communities that work on them, & has Fodor ever thought seriously about the diversity of classifications for plants, animals and artifacts that appear to exist (the point of my Central Australian Bug Story in the thread that launched this)? Anglo-European furniture terms might also be worth a close look - stools, chairs, sofas, benches etc and their non-equivalents in other languages.
Plants are another issue for Fodor's view, since indigenous Aussies really do distinguish all the plant species, and so do all other cultures that live off the land, I suspect. It is amazingly implausible that the putatively innate, non-decomposable inventory of plant types, presumably evolved in a small part of Africa, should turn out to be able to match almost perfectly to the plant species found in any part of the world.
Whereas, of course, a combination of Marr and/or Wierzbicka, with neural net training for colors, has no in-principle difficulty here; with attention, there's always something you can notice, eg the putatively identical long skinny leaves turn out to be flat at the base for one species of an 'indistinguishable' pair, round for the other.
Sorry Avery, but I don't see your point. Fodor's point is about the nature of induction. It requires a set of admissible hypotheses. Where do these come from? And given standard wisdom, all you can do is investigate the presupposed/given set of options. Fodor is asking what it means to be "given." It sure looks like this means that the options are innately specified and what you do with the data is select the best one. But selection theories, and this is one of those, presuppose that the answer is given in the options. You cannot learn what is unlearnable. A necessary condition of learnability is being in the hypothesis space. What can this mean other than that you "know" all the possible options?
ReplyDeleteYou say: "Fodor is asking what it means to be "given." It sure looks like this means that the options are innately specified"
The problem with this answer is that unless you believe the innately specified is god-given you have not really answered the question, just postponed it. As a naturalist you still need to ask why, and equally importantly how, whatever you term 'the given' is innately specified. As we have discussed a few times now, we are currently in no position to answer these questions in biological terms. The only people who have an answer are those who assume 'God did it' - that's one reason Fodor is so popular with the religious ID people [especially after his 'Against Darwinism'/'What Darwin Got Wrong'], no matter how loudly he stresses his view is not religiously motivated.
If Fodor's point is that the hypothesis space is innate, why is he bothering to state it at all, since that fails to distinguish him from Quine and most other people, including the generative semanticists who were the original targets of his attack on lexical decomposition. Part 1) in your lucid summary is not something that many people disagree with Fodor about, Part 2) seems empirically undersupported and not regarded as remotely plausible by people who actually work on lexical semantics.
ReplyDeleteIf everybody is misinterpreting Fodor, I suppose that the reason is that they assume he's proposing something fundamentally different from Ray Jackendoff, James Pustejovsky and Anna Wierzbicka, and that it is some kind of outgrowth or extension of his early critique of lexical decomposition, and we all appear to be struggling to perceive what that might be.
Another possible source of confusion is the analogy of lexical acquisition with the immune system pursued by Fodor & Piattelli-Palmarini, misleading because afaik the immune system doesn't display the semi-lattice type structure of NL concept systems, and doesn't seem able to zero in on specific antigens or classes of antigens with anywhere near the precision that lexical acquisition can zero in on species or higher-level classifications (if the critter has any combination of features that your perceptual system can pick up on, you can learn a word for it, including attributes of parts such as the shape of the spines on the front legs). & there is no lexical acquisition analog of autoimmune generalizations, where the system gets stuck on a wrong generalization that can't be fixed.
ReplyDeleteA lot of people would have been unimpressed by the paper, then failed to pay much attention to the book, since it didn't start out with a retraction of the claims in the paper iirc.
The immune system responds in a rigid and sometimes catastrophic way, the plant and animal classification is flexible and always acquires the correct category given a suitable collection of exemplars, and the disposition to attend to them.
It seems to me that Sue (Carey) has phrased the issue best in her own (excellent) book "Origins of concepts": On page 20, she writes that "this book's ... thesis is that the explanatory challenge is met ... by bootstrapping processes." A few lines below on the same page, she writes, "To "bootstrap" means, literally, to pull oneself up by one's own bootstraps---something that is clearly impossible." Well, if it's impossible, then it's impossible. If it's meant 'not literally', then the account is metaphorical, but if it's metaphorical, the explanatory challenge has not been met (non-metaphorically).
As a philosopher I want to make a point about your comment. You say:
""To "bootstrap" means, literally, to pull oneself up by one's own bootstraps---something that is clearly impossible." Well, if it's impossible, then, it's impossible. If it's meant 'not literally', then, the account is metaphorical, but if it's metaphorical, the explanatory challenge has not been met (non-metaphorically)"
The implication seems to be we should reject her account because of this flaw? If we reject Carey's account [on which I can't comment since I did not read her book] because the explanatory challenge has not been met (non-metaphorically) then, by parity of reasoning, we also should reject Chomsky's 'Merge' account because he explicitly claims the metaphors of his account [of how biological brains can generate sets etc.] have not been spelled out [Chomsky 2012] and Postal [2009, 2012] has claimed it is impossible they literally do [a challenge not refuted to date]. So the situation is the same as what you describe for Carey.
Now if we accept [as most people here do] that in Chomsky's case the metaphor is actually helpful and think it would be a mistake to complain that the explanatory challenge has not been met (non-metaphorically), then consistency would require accepting the same for Carey's account. [There may of course still be OTHER reasons to reject Carey's account; again, I have not read it.]
I was nodding in agreement with Norbert about the first half but there are some cases I thought of which seem to be where the hypothesis space changes.
So say you are working with an infinite dimensional feature space -- you can't represent the set of features (obviously) so you do it all dually, using the 'kernel trick' -- you take the examples and work with a similarity measure between them (which reduces to the dot product in the feature space). So at any given point the set of hypotheses that you are considering is, say, the set of (hyper-)planes definable using the examples you have seen so far. And that increases with time.
So I don't find that completely convincing -- I think Norbert would point out, and he would be right, that the hypothesis space is the set of all planes in the feature space, and this doesn't change -- and yet this shows that there is maybe an ambiguity in what we mean by an innate hypothesis space. Is it just what can be represented? Or is it more restricted?
Because if it is the former, then the claim that the concept 'carburetor' or CARBURETOR is innate is just the claim that it can be represented by a human brain and not by a cat brain which doesn't seem very controversial.
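Alex's dual-representation point can be made concrete with a minimal kernel perceptron (a toy sketch, not anyone's proposed model of concept learning): the learned hypothesis is stored as coefficients on the examples seen so far, so its representation grows with the data, even though the space it ranges over -- hyperplanes in the feature space fixed by the kernel -- never changes.

```python
def poly_kernel(x, y, degree=2):
    """Dot product in an implicit polynomial feature space."""
    return (1 + sum(a * b for a, b in zip(x, y))) ** degree

def train(points, labels, kernel, epochs=10):
    """Dual-form perceptron: one coefficient per training example.

    The stored representation grows with the examples seen, but every
    setting of the alphas still denotes a hyperplane in the feature
    space fixed once and for all by the choice of kernel.
    """
    alphas = [0] * len(points)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(points, labels)):
            pred = sum(a * yj * kernel(xj, x)
                       for a, yj, xj in zip(alphas, labels, points))
            if y * pred <= 0:  # mistake: add this example to the expansion
                alphas[i] += 1
    return alphas

def predict(x, points, labels, alphas, kernel):
    score = sum(a * yj * kernel(xj, x)
                for a, yj, xj in zip(alphas, labels, points))
    return 1 if score > 0 else -1

# XOR: not linearly separable in the input space, but separable by a
# hyperplane in the quadratic feature space the kernel implicitly defines.
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, 1, 1, -1]
alphas = train(points, labels, poly_kernel)
```

On the "ambiguity" question: here the innate part is arguably the kernel (what can be represented at all), while the growing list of alphas is just a bookkeeping device for navigating that fixed space.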
It's also worth noting that there are substantial differences between Carey's and Fodor's use of the term 'concept'. Fodor uses it IIRC in the philosopher's sense, so it has to be public and shared (between individuals and cultures), whereas Carey is using it in the psychologist's sense to mean some purely internal token.
I think the bugs are a better kind of example to think about than the carburetors, because we already have some ideas about how to represent the shapes and colors that seem to constitute the distinguishing features between the types, infinite dimensional (Marr's) for the shapes.
ReplyDeleteDoing carburetors properly would involve a lot of thinking about how people think about technological artifacts, & they're not part of a classification system for roughly the same kind of stuff that exists in variant forms in all cultures, frequently with independent origins.
Avery asks whether Fodor is saying anything that Quine didn't. Interestingly, Fodor explicitly addresses this in Appendix 6A of Concepts. Conclusion:
ReplyDeleteSo I'm not saying what Quine said: though it may well be what he should have said, and would have said but for his Empiricism. I often have the feeling that I'm just saying what Quine would have said but for his Empiricism.
Fodor's point here is that Quine thought the hypothesis space was defined by an essentially sensory similarity metric ("he assumes the Empiricist principle that the innate dimensions of similarity, along which experience generalizes, are sensory"). Fodor, of course, doesn't.
As to why Fodor bothers to point out that the hypothesis space is innate, I assume that he does so because it's a necessary precursor to an argument for the incoherence/impossibility of concept learning. To belabor a point that I took Norbert to be making: Fodor has an argument that concept learning via hypothesis testing is impossible given the definitionist view of concepts. He's not pointing out that the hypothesis space is innately determined because this is supposed to be a surprising or controversial point in and of itself, but because it underwrites his argument. (After all, not everyone readily acknowledges the innate determination of the hypothesis space. See e.g. Jan above.) As someone who finds Fodor's argument quite persuasive, I'd like to see it explained how the bug examples etc. figure in a refutation of it.
No-one finds it intuitively plausible that concepts like ZEBRA or LASER are unlearned, and no-one can blame Fodor's opponents for giving our intuitions a good tease with CARBURETOR, TARMAC, ALUMINUM, and all the rest. But the unlearnability of these concepts is the conclusion of the argument. With respect to our own private hunches, we're no doubt all entitled to reject apparently sound arguments on the grounds that their conclusions are nuts. But it sure would be nice to know where the argument goes wrong.
@ Alex Drummond
I do believe in an innate hypothesis space in some sense. I was only saying that the nature of the hypothesis space does not follow from Fodor's argument. For some further remarks on this, see my reply to Norbert above.
I find it intuitively rather implausible that ZEBRA is unlearned, but that's just a tiny quibble.
The real issue I want to raise concerns the innate hypothesis space. For someone like Descartes that's fine; he assumes a BENEVOLENT god designed that space and as a bonus gave us the means to explore it [our rational minds]. If one accepts Descartes' premises this is a very simple and elegant solution.
Given that most of us reject the God-premise, something else must 'be responsible' for our having the hypothesis space we do and not some other that would be logically possible. So saying it's innate might be true but is not terribly informative. If the constraints are innate they are determined by biology [and 3rd factors, as Chomsky likes to point out]. If we assume that evolution had anything to do with 'designing' the hypothesis space we ended up with, we have to ask which factors evolution can and cannot 'work' with. The value I see in Fodor's anti-Darwinism publications is that he points out that evolution could not have acted on some of the factors some proponents of evolutionary psychology focus on. [I just disagree with the far-reaching conclusions Fodor draws from this.]
I know I sound like a broken record, but to get past the obvious [there are some innate constraints] we need to do empirical work [most notably of the biological kind, because we cannot end up with a hypothesis space that is not biologically realizable...]