There have been two kinds of objections to Fodor’s argument
in the thread of ‘Fodor on Concepts.’ Neither of them really engages with his
arguments. Let me quickly review them.
The first comes in two flavors. The first flavor is that if he is
right, then the concept ‘carburetor’ is innate, and this is batty.
The second flavor is a variant of the first, but it invokes Darwin and the holy spirit of
biological plausibility to make the same point: that putting ‘carburetor’
concepts into brains is nuts.
Let’s say we agree with this. This implies that there is something wrong
with Fodor’s argument; after all, it has led to an unwanted conclusion. If so, what is wrong? Treat Fodor as the new
Zeno and let’s clarify the roots of the paradox. This would be valuable and we
could all learn something. After all, the argument rests on two simple
premises. The first is that if learning is a species of induction, then there must be a given
hypothesis space, and that this hypothesis space cannot itself be learned, for it
is a precondition of induction; hence it is not learned, viz. innate. The second
premise is that there are many more primitive concepts in the hypothesis space
than one might a priori have imagined.
In particular, if you assume that words denote concepts, then, given the absence
of decomposition for most words, there are at least as many concepts as
words. Let’s consider these premises again.
The first is virtually apodictic. Why? Because all inductive
theories to date have the following form: of the alternatives a₁, …, aₙ,
choose the aᵢ that best fits the data. The alternatives are given;
the data just prunes them down to the ones that are good matches with the
input. If this is so, Fodor notes, then
in the context of concept acquisition the alternatives a₁, …, aₙ
are innate, in the sense that they are not there as a result of inductive processes but
are available so that induction can occur.
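To make the shape of this premise concrete, here is a minimal sketch in Python of induction as selection over a fixed hypothesis space. Everything in it (the function names, the toy hypotheses, the scoring rule) is my own illustrative invention, not anyone’s actual learning model; the point is only that the learner chooses among alternatives that are already there.

```python
# A minimal sketch of induction as selection over a GIVEN hypothesis
# space. All names here are illustrative assumptions.

def induce(hypotheses, data, score):
    """Return the hypothesis in the given space that best fits the data.

    Note what this learner cannot do: add a new member to `hypotheses`.
    The data prune the alternatives; they do not create them.
    """
    return max(hypotheses, key=lambda h: score(h, data))

# Toy usage: which of two pre-given rules fits the observed pairs?
data = [(1, 2), (2, 4), (3, 6)]
hypotheses = [
    ("double", lambda x: 2 * x),
    ("square", lambda x: x * x),
]
score = lambda h, d: sum(1 for x, y in d if h[1](x) == y)

best, _ = induce(hypotheses, data, score)
print(best)  # -> "double": the data selected it; they did not create it
```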
Like I said, this is not really debatable unless you come up with a
novel conceptualization of induction.
The second premise is empirical. It is logically possible
that most of our concepts are complex combinations of simple ones, i.e. that
most concepts are defined in terms of more primitive ones. Were this so, concept acquisition would
be definition formation. Fodor has argued at great length that this is empirically
false, at least if you take words to denote concepts. English words do not by
and large resolve into definitions based on simpler concepts. Again, this conclusion is not that surprising
given the work of Austin and the other ordinary language philosophers. They spent years showing that no two words
mean the same thing. The failure of the Fodor-Katz theory of semantic markers
pointed to the same conclusion, as did the failure of cluster concepts to shed
any light on word/concept meaning.
If most words are definitions based on simpler concepts, nobody has
really shown how they are. Note that
this does not mean that concepts fail to interrelate. It is consistent with
this view that there are scads of implicational relations between concepts.
Fodor is happy with meaning postulates, but they won’t suffice. We need
definitions, for only in this way can we get rid of what I would dub the
“grandmother problem.” What is that?
How are you able to recognize your grandmother? One popular
neuroscience theory is that your grandmother neuron lights up: every “concept” has its own dedicated neuron.
This would be false, however, if the concept of your grandmother were defined
via other concepts. There wouldn’t have to be dedicated grandmother neurons,
for the concept ‘grandmother’ would be entirely reducible to a combination of
other concepts. However, this holds only if the concept is entirely reducible
to other primitive concepts, and only a definition achieves this. So, either
most concepts are definable or we
must accept that the set of basic concepts is at least as large as any given
lexicon, i.e. that the concept ‘carburetor’ is part of the innate hypothesis
space.
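To see what a definition would buy us here, consider a toy sketch, again in Python and again with invented primitives (a set of PARENT facts and a FEMALE set), of ‘grandmother’ as a fully reducible concept. If the definition works, nothing grandmother-specific needs to be in the primitive stock.

```python
# A toy sketch of definitional reduction, assuming (purely for
# illustration) the primitives PARENT and FEMALE.

def parent(x, y, facts):
    """Primitive: x is a parent of y, given (parent, child) facts."""
    return (x, y) in facts

def grandmother(x, y, facts, female):
    """Defined concept: x is a grandmother of y iff x is female and x
    is a parent of some parent of y. No grandmother primitive needed."""
    people = {p for pair in facts for p in pair}
    return x in female and any(
        parent(x, z, facts) and parent(z, y, facts) for z in people
    )

# Toy usage with hypothetical facts.
facts = {("ada", "bob"), ("bob", "cal")}
female = {"ada"}
print(grandmother("ada", "cal", facts, female))  # -> True
```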
I sympathize with those who find this conclusion
counter-intuitive. However, I have long had problems getting around the
argument. The second premise is clearly the weaker link. Still, given that we
know how to show it to be false, viz. by providing a bunch of definitions for a
reasonable subset of words, and the fact that this has proven pretty hard to
do, it is hard to avoid the conclusion that Fodor is onto something.
Jan Koster has suggested a second way around Fodor’s
argument, but it is not one that I understand very well. He suggests that the
hypothesis space is itself context sensitive, allowing it to be sensitive to
environmental input. Here are two (perhaps confused) reactions: (i) in any
given context, the space is fixed and so we reduce to Fodor’s original
case. I assume that we don’t fix the
values of these contextual indices inductively. Rather, there is a given set of context parameters which,
when fixed by context, specify non-parametric values. Fixing these parameters
contextually is itself brute causal, not inductive. If this is so, I don’t see
how Jan’s proposal addresses Fodor’s argument.
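For what it’s worth, here is how I understand reaction (i), as a very speculative Python sketch with invented context names: if context merely selects among pre-specified spaces, then once the context is fixed we are back to induction over a given space.

```python
# A speculative sketch of reaction (i). The context names and spaces
# below are invented for illustration only.

def hypothesis_space(context):
    """Map a context parameter to a hypothesis space. The mapping itself
    is given in advance: context selects among pre-specified spaces;
    it does not create new hypotheses."""
    spaces = {
        "kitchen": ["cup", "bowl"],
        "garage": ["carburetor", "wrench"],
    }
    return spaces[context]

# The context is fixed brute-causally, not by induction. Once fixed,
# we choose from a given space: Fodor's original case again.
print(hypothesis_space("garage"))  # -> ['carburetor', 'wrench']
```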
(ii) As Alex Drummond (p.c.) observed: “It sure seems like we don’t
want context to have too much of an influence on the hypothesis space, because
it would make learning via hypothesis testing a bit tricky if you couldn't test
the same hypothesis at different times in different situations.” He is right.
Too much context sensitivity and you could never step into the same conceptual
stream twice. Not a good thing if you are trying to acquire novel concepts via
different environmental exposures.
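Drummond’s worry can be given the same toy treatment. In the (invented) sketch below, the space shifts with each exposure, so no hypothesis can be re-tested and evidence never accumulates across situations.

```python
# A toy illustration of the worry: if the hypothesis space shifts with
# every exposure, evidence cannot accumulate on any one hypothesis.
# The spaces below are invented for illustration only.

exposures = [
    ["double", "square"],   # space at time 1
    ["triple", "halve"],    # space at time 2: "double" is gone
]

tests_per_hypothesis = {}
for space in exposures:
    for h in space:
        tests_per_hypothesis[h] = tests_per_hypothesis.get(h, 0) + 1

# No hypothesis is ever tested twice, so nothing can be confirmed.
print(max(tests_per_hypothesis.values()))  # -> 1
```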
Fodor
has a pretty argument. If it’s false, it’s not trivially false. That’s what
makes it interesting, very interesting.
Your job, Mr Hunt, should you decide to accept it, is to show where it
derails.