Wednesday, August 27, 2014

Nativism, Rationalism and Empiricism-1

There are two different kinds of arguments for a Rationalist approach to the study of mind. The first, so far as I can tell, is virtually tautological. The second is quite substantive.  What are they? I’ll try to lay them out in a couple of posts. This one here discusses the “tautology.”

The tautological version is well laid out in the Royaumont conference papers (here) and in how they relate to the innateness “controversy.”  I put the latter in quote marks because most of the participants (including and especially Fodor and Chomsky, the so-called hard core nativists) considered the idea that the mind has innate structure nothing but a simple tautology. Indeed, this is how Fodor and Chomsky repeatedly refer to the “innateness hypothesis” (see e.g. 263, 268). It’s tautological that the mind (and brain) has structure and biases, as without these there can be no induction whatsoever, and it is taken for granted that biological systems are constantly inducing (viz. engaging in non-demonstrative inference).  This said, it is interesting to re-read the discussions, for despite this general agreement there is lots of intellectual toing and froing. Why? Because, as Chomsky puts it (see Fodor’s version p. 268):

What is important is not just to see that something is a “tautology,” but also to see its import. (262)

What’s the import, as Fodor and Chomsky understood things?  That there is no “learning” without a set of given projectable predicates that undergird it. Or, more accurately, as Fodor puts it, “the very idea of concept learning is confused” (143).  And the confusion? Two related, but importantly different, concepts have been run together: concept acquisition (CA) and belief fixation (BF). Regarding the former, we have no theory of how concepts are acquired. What we have are theories of BF, which are, at the most general level, inductive logics of various kinds; by their nature, these presuppose a given set of projectable predicates and so cannot themselves serve as theories of CA.  Or as Fodor puts it:

…no theory of learning that anybody has ever developed is, as far as I can see, a theory that tells you how concepts are acquired; rather such theories tell you how beliefs are fixed by experiences – they are essentially inductive logics. That kind of mechanism, which shows how beliefs are fixed by experiences, makes sense only against the background of radical nativism. (144).

Fodor and Chomsky (and most of the other participants at the Royaumont conference I might add if the comments section is any indication) believe that the above is a virtual tautology. All theories of learning are selective (i.e. stories where the given hypothesis space proposes and the incoming experience disposes).[1] Where tautology ends and (some of the) hard work begins is to specify the set of projectable predicates that are in fact biologically/cognitively given (i.e. the shape and content of the hypothesis space that BFers actually bring to the “learning” problem).  To repeat, no given space of alternatives, no way for an inductive logic or theory of BF to operate. The account of what is given is (or is a very good part of) a theory of the relevant biases.[2]
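The selectionist picture can be made concrete with a toy model. The sketch below is my own illustration, not anything Fodor or Chomsky offer: belief fixation modeled as Bayesian updating over a fixed, given hypothesis space. The predicate names (`red`, `square`, `red-and-square`) and the noise parameter are invented for the example. The point to notice is that experience only reweights the given hypotheses; whatever the learner converges on was in the space from the start.

```python
# A minimal sketch (illustrative only) of belief fixation as Bayesian
# updating over a FIXED hypothesis space. Experience reweights the given
# hypotheses but can never add a new one.

# Hypotheses: candidate extensions for a predicate, stated over the
# GIVEN predicates red/square. Each stimulus is a (color, shape) pair.
hypotheses = {
    "red":            lambda color, shape: color == "red",
    "square":         lambda color, shape: shape == "square",
    "red-and-square": lambda color, shape: color == "red" and shape == "square",
}

# Uniform prior over the given space.
beliefs = {h: 1.0 / len(hypotheses) for h in hypotheses}

def update(beliefs, stimulus, label, noise=0.05):
    """Fix beliefs given one labeled experience; renormalize."""
    color, shape = stimulus
    posterior = {}
    for h, extension in hypotheses.items():
        predicted = extension(color, shape)
        likelihood = (1 - noise) if predicted == label else noise
        posterior[h] = beliefs[h] * likelihood
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Labeled experiences: things that do and don't fall under the predicate.
data = [(("red", "square"), True), (("red", "circle"), False),
        (("blue", "square"), False), (("red", "square"), True)]

for stimulus, label in data:
    beliefs = update(beliefs, stimulus, label)

best = max(beliefs, key=beliefs.get)
print(best)  # prints: red-and-square
```

Note that the learner "acquires" nothing here: it merely selects among hypotheses already stated over its given predicates, which is exactly the tautological point about BF presupposing a hypothesis space.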

Fodor and Chomsky pull several important consequences out of this tautology.

First, that many positions confidently explored in the cognitive literature are, strictly speaking, incoherent as expressed. Fodor discusses the “Piagetian” view that developmental conceptual change is a learning process in which learning replaces earlier conceptually weaker stages with subsequent conceptually stronger ones.  Fodor argues that this position is, very simply, conceptually untenable. It is not untenable that development involves a succession of stages where the i-th stage is conceptually stronger than the (i-1)-th stage. Rather, it is untenable that this development is a result of stronger concepts arising via induction (i.e. learning). Why? Because for induction to be possible the conceptually stronger system has to be representable. But to be representable means that the concepts that represent it must be cognitively available (must already be in the hypothesis space). But if so, they cannot enter the hypothesis space by induction, as they are already available for induction.  So, development cannot be a matter of CA via learning.

Does this mean that development cannot be a matter of stronger concepts being acquired over time? No. But it does mean that this process cannot be inductive (e.g. this scenario is compatible with “maturing” new concepts, just not learning new ones).  Note too that this is compatible with treating development as a matter of new belief fixation. But recall that BF implies that the relevant concepts are given and available for computational use. Or, as Fodor puts it:

…a theory of conceptual plasticity of organisms must be a theory of how the environment selects among the innately specified concepts. It is not a theory of how you acquire concepts, but a theory of how the environment determines which parts of the conceptual mechanism in principle available to you are in fact exploited. (151)

In other words:

…fixation of belief along the lines of inductive logic…is one that assumes the most radical possible nativism: namely that any engagement of the organism with its environment has to be mediated by the availability of any concept that can eventually turn up in the organism’s belief. The organism is a closed system proposing hypotheses to the world, and the world then chooses among them in terms explicated by some inductive logic. (152)

To repeat, Fodor and Chomsky and virtually all the participants at the Royaumont conference take this to be tautological (as do I). The only theories of learning we have are theories of BF and these theories all presuppose that the stock of possible acquirable concepts is given.  Radical nativism indeed![3]

So far as I can tell, the logic that Fodor and Chomsky outlined well over 30 years ago has not changed. And, if this is correct, then the central problem in cognition, linguistics being a special case, is to adumbrate the relevant hypothesis space for any given domain. And the only way to do this is to investigate the acquirable in terms of the acquired and argue backwards. If BF is the name of the game, then what is presupposed had better suffice to deliver the concepts acquired, and once one looks carefully at what’s on offer, this simple requirement appears to rule out most of the most popular theories, or so Fodor and Chomsky (and I) would argue.

It is worth observing that this tautology was recognized by the great empiricist philosophers.  In this sense, the blank tablet metaphor generally associated with their theories of mind is unhelpful at best and misleading at worst.  The distinguishing mark of empiricism is not that the mind is unstructured (comes with no given hypothesis space) but that the dimensions of the hypothesis space are entirely sensory.  On this view, admissible concepts are either sensory primitive concepts or Boolean combinations of such.  This is a substantive theory, and, as Fodor notes, it has proven to be false.[4] Or as Fodor, in his characteristically elegant way, puts it:

I consider that the notion that most human concepts are decomposable into a small set of simple concepts –let’s say, a truth function of sensory primitives – has been exploded by two centuries of philosophical and psychological investigation. In my opinion, the failure of the empiricist program of reduction is probably the most important result of those two hundred years in the area of cognition. (268)

As many of you know, Fodor has argued that not only is there no possible reduction to a small number of sensory primitives, but that there is very little possible reduction at all, at least when it comes to our basic lexical concepts. I personally find his arguments against reductions rather strong.[5] However, whether or not one buys the conclusion, the form of the argument seems to me correct: if you want a restricted set of primitives then you are obliged to show how these can be used to build up what we actually observe. The empiricist restriction to a small base of sensory primitives failed to deliver the goods; therefore, it cannot be correct: our basic concepts are not restricted to sensory primitives.

So, is nativism ineluctable? Yup.  So what’s the fight between Rationalists and Empiricists about? It’s about two things: (i) the shape of the hypothesis space: what are the primitive projectable predicates and how do they combine to deliver more complex predicates (e.g. what are the basic operations, primitives and principles of UG); and (ii) how, given this space, are beliefs fixed (e.g. what is the relation between PLD and G)? Everyone is a nativist when it comes to CA. This is not controversial (or shouldn’t be). Empiricists are nativists who believe in a pretty spare hypothesis space. Rationalists are happy to entertain far more complex ones.  This difference has an impact on how one understands BF. I turn to this in the next post.

[1] The distinction between instructive and selective theories has a long history in the study of the immune system.  Here is a useful summary. Fodor’s point, which seems to me to be entirely correct (and obvious), is that all current theories of learning are selectionist and hence presuppose a fixed innate background of relevant alternatives.
[2] There may be room, in addition, for accounts of how to use incoming data to update the information that guides a learner’s movements across the given hypothesis space.  What kinds of evidential thresholds are there, how many competing hypotheses does one juggle at once, what are the functions that in/decrease a hypothesis’ credibility “score,” how many “kinds” of evidence are tabulated at once, does the credibility function treat all hypotheses the same or are some more privileged than others, etc.? These are all relevant concerns. But Fodor and Chomsky’s tautological point is that they are all secondary to the question of what the hypothesis space looks like.
[3] This conclusion is still resisted by many. See for example, Gallistel’s review of Sue Carey’s book here.
Others also seem to misunderstand the import of this. For example, Amy Perfors (here) claims that Fodor’s point is “true but trivial” (132). This is offered as a critique, but it is exactly Fodor’s point. As Perfors emphasizes: “…any conception which relies on not having a built-in hypothesis space is incoherent…” (128). This is a vigorous rewording of Fodor and Chomsky’s point.  It is curious how often one finds strongly worded criticisms of nativist positions followed immediately by those same positions offered as novel insights by the very same author, in the very same paper.
[4] Once again some have confused the issues at hand. Perfors (see above) is a good example. The paper contrasts Nativism and Empiricism (127). But if everyone is a nativist with respect to the requirement that for learning to be possible a hypothesis space must be given, then everyone is a nativist in Fodor and Chomsky’s sense.  The debate is not over whether we are nativists, but over what kind of nativists we are (i.e. how rich a hypothesis space we are willing to tolerate).  The contrast is between Rationalists and Empiricists, the latter limiting the admissible predicates and operations to associationist ones. And, as Fodor notes (see immediately below), this is what’s wrong with Empiricism. It’s not the nativism, but the associationism, that makes empiricism a failed program.
[5] I hate to pick on the Perfors paper (well, not really) but it demonstrates how cavalier critics can be when it comes to positions that they consider clearly incorrect.  The paper argues that Fodor’s critique can be finessed by simply understanding that one can have hierarchies of hypothesis spaces (132-3). Thus, contra Fodor, it is possible to treat elements of level N as decomposable into concepts of level N+1, and this gets all that Fodor criticizes but without the unwanted implicational consequences.  Maybe. But oddly the paper never actually illustrates how this might be done. You know, take a concept or two and decompose them and show how they operate to license the wanted inferences and block the unwanted ones. There are lots of concepts around to choose from. Fodor has discussed a bunch. But the Perfors paper does nothing even approaching this. It simply asserts that conceptual hierarchies get one around Fodor’s arguments.  This is cheap stuff, really cheap. But sadly, all too common.


  1. I think this post is right on.

    Just one caveat: I think you could run the same argument even if you allow for simple concepts to originate from simple sensory impressions (or some array of distinctive features or whatever you want to call it). Though if you did it wouldn't buy you much because complex concepts aren't exhaustively the products of their constituents (simple concepts). I'm fairly sure Fodor goes on about this at length in his 'Hume Variations'.

  2. At long last we're back to humour.

    Norbert lambasts Amy Perfors because "oddly the paper never actually illustrates how this [getting concepts at level N by decomposing some from level N+1] might be done. You know, take a concept or two and decompose them and show how they operate to license the wanted inferences and block the unwanted ones. There are lots of concepts around to choose from".

    Bad Amy, how dare she! Yet, without missing a beat Norbert tells us that his favoured view [radical nativism] requires that we have "innately specified concepts" - without ever illustrating how those might be implemented. You know, taking a concept or two and showing how they are innately specified. But maybe I am hasty; we are promised "a couple of posts". So maybe finally we will learn with specific examples how these concepts got into our brains and exactly where they are located. A story analogous to the one that explains why we grow arms rather than wings ...

  3. Hmm yes. It seems to me that Anna Wierzbicka's 'explications' work on the whole about as well as generative grammars (namely, sort of, with issues), but trying to break down meanings into components is supposed to be stupid, while connecting forms to meanings with nicely computable functions is supposed to be smart.

    1. @Avery:
      I think I understand Fodor's argument differently than you do. His point is not that concepts (and words) might have analytical relations to one another. They might. It is, rather, that these relations are not definitional. He is happy enough with "meaning postulates" mediating inferences, just not definitions.

      This distinction, as I understand it, has one important implication: there is no way of reducing the basic stock of primitive concepts and building up complex ones from combinations of the primitives (i.e. carburetor concepts are not reducible to logical combinations of other more primitive concepts). If 'dog' implies 'animate' it is not because the concept of animacy is "part of" the definition of 'dog.' If this is correct, then the basic inventory of primitive concepts is very large, indeed at least as large as our stock of vocabulary items.

      Two last points:

      First, the implicational structures that meaning postulates (MP) license can be interpreted in either Rationalist or Empiricist terms. The MPs can be innate or acquired inductively. Thus, Fodor's views are consistent with lots of work done on finding the conceptual relations between concepts, and, in its basic form, his argument does not imply anything about how word meanings are fixed as beliefs. What Fodor is against is a definitional view of concepts, not one that sees them being intricately related.

      Second, this view was already implicit in the earliest Fodor and Katz work on markerese. It did not go unremarked that the markers defining a concept almost always bottomed out in a feature specific to that concept. So, 'dog' was not merely +animal, +sentient, … +four-legged, etc.; the very last feature was +dog. This makes Fodor's later point, for if definitions were possible +dog would not be necessary.

    2. Your understanding of Fodor's idea seems to be pretty much the same as mine, and I (and perhaps almost everybody else?) consider it implausible because of the intricate overlaps between the coverage of words in different languages.

      It is furthermore not so clear that definition/explication doesn't work at all; a lot has happened since the 1970s. It is a noticeable fact about lexical semantics that the meaning postulate idea seems to have led to no concrete results whatsoever, whereas decompositionalists such as Jackendoff, Pustejovsky and Wierzbicka and her group have done quite a lot. In _English: Meaning and Culture_ (2006) forex, W goes through the apparently unusually elaborated inventory of causative verbs in English (have, make, get to, let, V someone into something), describing various subtle differences with her 'primitives', to which I'd add the observation that 'cause' seems to be mostly used as a backoff in formal style when none of the others apply. Wierzbickian explications do not work perfectly, but neither do generative grammars, and indeed it might be the case that the explications work better than the grammars relative to the amount of person-hours that have been spent on them.

      It is of course essential to have the right primitives; the Wierzbickian set is in coverage not so different from that suggested by Margolis and Laurence in Alex's link below. For cognition there is for example THINK, SAY, FEEL and KNOW.

    3. Where meaning postulates do lead to somewhere is in algebraic semantics, and, indeed, some of the Wierzbickian primes seem to obey some a.s. type principles; I discuss some examples in my NSM & Formal Semantics paper recently uploaded to lingbuzz.

    4. @Avery
      I don't doubt that we have learned a lot about lexical items since the early 70s. What I am less clear about is what these results show, and here it would be great if you could provide an example or two and maybe a short-form elaboration (if you are interested in doing this, I would be happy to post a longish version as a post). This would benefit all of us (or at least me). But, back to the main point: I am not sure that Fodor's main point fails: explications are not definitions. The value of the latter is that they are able to entirely reduce a complex to a combination of primitives. If this were generally doable then we could argue that most of our concepts are derived. If we cannot, then we must conclude that there are more primitives than we thought. Curiously, I don't think that anyone finds the conclusion that most of our words/concepts are primitive particularly attractive. It does sound odd to say that 'carburetor' is innate. However, so far as I can tell, this is what one must assume unless definitional decomposition is viable. If one doesn't like the conclusion, one must either show how to decompose in general or come up with another view of induction (that's Fodor's argument IMO). Is this consistent with conceptual structure? Yes. It's just that the structure is not definitional. This is what Fodor means by meaning postulates: a non-definitional way of allowing for conceptual dependencies. Explications are a version of this, so far as I can see.

    5. Primitive does not imply innate.

      As I understand the current dialectic (which I don't follow that closely to be honest), Fodor has now retracted RCN in favour of the claim that concepts cannot be learned, though they need not be innate; he claims that they can be "acquired" through some process which does not count as learning (which he thinks is confused) since it is not rational or intensional or something in the relevant way.

    6. @Alex
      Correct. He always held this view, at least as far back as Royaumont. His argument is based on the structure of inductive theories. They require a given set of concepts to operate. Non-inductive development does not. Does this mean it's not innate? Well, that depends on what one takes innate to be. If one contrasts it with learned, and by that one means acquired via induction, then it is innate. 'Innate' is a very labile concept. I generally use it in contrast to learning/induction, as this is a dominant view of how mental development occurs (brains/minds change via learning/induction). However, this is pretty terminological and not worth spending much time on. The central point is Fodor's general claim, which I think is pretty widely accepted now, that learning/induction logically presupposes a given set of concepts.

    7. Yes, but ...
      So I have the concept SMARTPHONE, say. So how did I get it?

      A) You claim it is not learned and is therefore innate. So for many people, including me, that is a ridiculous claim.

      B) There is a tautological claim, which is that I have an innate intellectual endowment that allows me to acquire the concept SMARTPHONE in some way, on the basis of reading, using them, talking to people etc. Obviously any concept I have, I must have the innate ability to have it. But the ability to have a concept is not the same as having the concept, as presumably, Louis XIV for example did not have the concept SMARTPHONE, though he presumably would have had it, if he had been born recently.

      So I agree with B and I think A is absurd.

      Maybe this is just a terminological dispute about innate, in which case it would be good to clarify, especially in the light of all of the papers (e.g. Mameli and Bateson) saying that we should avoid the term as it only causes confusion.

    8. This comment has been removed by the author.

    9. This comment has been removed by the author.

    10. Well I'd recommend pp 17-22 of this paper (from beginning of section 2.2):

      Reconciling NSM and Formal Semantics

      and probably at least the preceding pages back to the beginning of section 2 on page 15 for a bit of orientation. Or perhaps even the whole thing ...

      One point is that 'explication' is a term that AW decided to use instead of 'definition', because it can in principle cover a somewhat wider range of factors, such as conditions of appropriate use. But Wierzbickian explications are supposed to do all the work of definitions, and more.

      And, since they have been mentioned, here might be a rather quickly cooked up story about smartphones.

      smartphones are part of a structured semantic field, which seems to be organized like this:

      [No hypernym for communication devices]
      ..walkie-talkie, two way radio

      ....[landline phone] phone (mobie(Aus), cell (US))
      ......[feature phone (dumbphone?)]
      ....sat(ellite) phone

      The bracketed terms are secondary terms that are mostly not used, except to indicate a contrast with their sisters. The supercategories here are all 'basic' in that they can always be used; you can call your smartphone your 'phone', or your 'cell/mobie' (also your sat phone, if you have one, I think, could check this with someone).

      So we need to explicate these terms. For 'phone' (words requiring further explication are square-bracketed):

      1. A kind of thing. People [make] these things.

      2. If two people have these things, they can say things to each other when they
      are very far away from each other.

      3. This cannot happen when these things are not [connected] to a [network]

      3 distinguishes phones from walkie-talkies, and also from radio broadcasts, which are also distinguished by reciprocity conditions of 2. (which need more work, from an NSM point of view)

      For mobile phone:

      All of the above, plus:

      4. You can [easily] [carry] these things with you (in your pocket (?))

      5. They do not [need] to be [connected] by a [wire]

      (I think that somebody who had a mobile but didn't know about text messaging would still 'know what a mobile was', although not know a very important thing about it, so I've left texting out of the explication, although that is debatable). There's also a size issue: I suspect that a laptop with a simcard in it would not really count as a phone; note the term 'phablet' for things that are on the size-factor borderline between phones and tablets.

      For smartphone:

      5. These things can do many other things.

      6. [Using] these things, it is [easy] to [find out] many things

      (I think that the ease of finding things out is the critical feature of smartphones).

      You can say, of course, that there is always something a bit wrong with explications/definitions, but this is also true of generative grammars, so I see nothing fundamentally wrong with AW as opposed to NC here. & like generative grammars, explications appear to be able to be improved in their coverage. So, from this point of view, Fodor is like those people back in the sixties (Hall?) who noted some issues and failures of coverage of transformational grammars & started shrieking and wailing that the whole idea was hopeless and impossible.

    11. Alex C. and Avery: interesting comments! I think they show nicely that we create problems when we don't distinguish between two rather different projects:
      [1] search for precise definitions which provide sufficient and necessary criteria [e.g. ensure that all dogs and only dogs are referred to by the concept DOG]
      [2] discovering how children acquire concepts and how concepts are used by competent adult speakers.
      [1] is a project for formal linguistics and/or philosophy
      [2] is a project for psychology.
      Even though there is obviously some overlap between [1] and [2] one should not equate the two projects.
      [1] may turn out to be unsolvable [partly for reasons given by Fodor] because it requires that concepts are defined [only] by other concepts. It is quite possible that we'll never find a definition of the form

      DOG = C1+C2+C3+...CN.

      But from that fact one should not conclude that therefore concepts like DOG must be innate. This is because when addressing [2] we have to also take into account that language use does not occur in isolation but within a rich context of non-linguistic information and relying on non-linguistic cognitive resources. Furthermore, [2] does not require that we all share the same concepts, just that there is enough overlap [e.g. I do not need to know what a satellite is to know what a smartphone is - so Andrew and I can both use SMARTPHONE].

    12. Yes, I broadly agree with that : the problem (in my opinion) is that the philosopher's notion of a concept and the psychologist's notion of a concept are just incompatible. And so an attempt to come up with a psychological theory of concept acquisition based on philosophical stipulations about what a concept is, is bound to lead to absurd conclusions. My rather uninformed feeling is that the problematic stipulation in Fodor's case is that they are public and shareable and thus we have strict identity relations between the concepts of different people speaking different languages in different eras.
      But the problem may well lie elsewhere.

    13. @Alex re:There is a tautological claim, which is that I have an innate intellectual endowment that allows me to acquire the concept SMARTPHONE in some way, on the basis of reading, using them, talking to people etc. Obviously any concept I have, I must have the innate ability to have it..

      Fodor notes this reply in his Royaumont participation on pp. 151-2. He notes the banal position that you mention and says the following: The banal thesis is just that you have the innate potential of learning any concept you can in fact learn; which reduces, in turn, to the non-insight that whatever is learnable is learnable. …What I intended to argue is something very much stronger; the intended argument depends on what learning is like, that is the view that everybody has always accepted, that it is based on hypothesis formation and confirmation. According to that view, it must be the case that the concepts that figure in the hypothesis you come to accept are not only *potentially* accessible to you, but are *actually exploited to mediate the learning*…The point about confirming a hypothesis like "X is miv iff it is red and square" is that it is required that not only *red and square* be *potentially* available to the organism, but that these notions be effectively used to mediate between the organism's experiences and its consequent beliefs about the extension of miv. …

      So, your remarks do not appear to me to have yet engaged with Fodor's point. If inductive logics require given hypothesis spaces to get off the ground, and if we attribute an inductive logic to a learner, then we must also be attributing to them the given hypothesis space AND we must be assuming that beliefs are fixed in virtue of exploiting the properties of that space.

      As per (A): so you don't like the conclusion. Fine, then show where it gets off the tracks. Treat it like Zeno's paradox and find a way of reconceptualizing the view where this consequence does not follow. Simply asserting that you find the conclusion absurd gets nobody anywhere. BTW, from what I can gather, Fodor is not too crazy about the conclusion either but is honest enough to admit that he sees no way around it given current assumptions about learning.

      Last point: I did clarify what I meant by innate. I contrasted it to learned. Learning is a rational process based on evidence. Maturation is not. Nor is genetic inheritance, nor is epigenetic inheritance. When I say that it is innate, I do not intend to imply that it is present from the get go (think puberty and baldness) nor that it is "in the genes" (I am happy with epigenetic causes or developmental accounts). I mean that the knowledge of interest is not fixed by experience but is traceable to some other causal mechanism. Of course, it is interesting to ask what specific causal mechanism it is. But, right now, though we can pretty easily eliminate some (e.g. learning of principle A) we cannot fix which innate factor is actually at play.

    14. The problem with the argument is generally taken to be the claim that the only things that count as learning are hypothesis testing.

      So I think that I acquired SMARTPHONE, through experience (e.g. seeing one, using one). I don't think that hypothesis testing was involved. I would call that learning; maybe you would call it acquisition since it doesn't involve hypothesis testing. But I just don't know what saying it is innate means.

      And if you reject the role of experience you get what Fodor calls the doorknob/DOORKNOB problem.

      Have a look at the Margolis and Laurence paper "Learning Matters", if you have time. I'd be interested in your take on the arguments there, since they are generally nativist.

    15. This comment has been removed by the author.

    16. I guess that knowledge by acquaintance is possible: Just "seeing." But I think that the standard view is that fixing a concept involves taking what one sees and processing it cognitively. If there is no such processing, then no learning. In fact, even for simple transduction, which is what you seem to be hinting at, there is a GIVEN concept that is the target of the transduction process. If this is so, then I don't see how just "seeing" gets one around Fodor's main point.

      This relates to another issue: EVERYONE (and I mean EVERYONE) assumes that experience is a necessary part of belief fixation. The question is not and never has been whether experience is required but what experience does. There is a difference between being causally implicated and being part of a learning process. The latter is a special kind of causation. If a concept pops into my mind after I am hit on the head with a hammer, even if the two are causally linked, this is NOT learning. Now, the standard view has been that something like learning is what is causally responsible for our cognitive structures. I am happy to assume that this is false in many cases. But Fodor is assuming that this is taken to be trivially true and that even in cases of simple perception, there's lots of induction. And if there is no induction, just transduction, then there is still a mapping to given concepts.

      I will get to the paper soon. It's start of term here.

    17. @Christina: my posting above should have been more explicit about the fact that the definitions/explications are supposed to be formulated in terms of an innate set of semantic primitives (now called 'primes' in the NSM literature), together with terms previously defined ('molecules'), which appear in parentheses in my smartphone sampler.

      There are currently 65 proposed primes, & AW seems to think that that's pretty close to the complete set. They are motivated by difficulties in defining them in terms of other concepts, and also filtered by the requirement that they be arguably present as words or expressions in all languages, modulo various confounding factors. Prime sets have been set up for about 30 languages at this point, including ones typologically different and areally distant from English.

      So: learning meanings. This is supposed to happen by adding (& perhaps modifying) components of the explication until it fits usage. Cells/mobies are roughly similar in shape to wallets, but children do not observe people apparently having conversations via their wallets, so something about saying things gets into the explication of cell/mobie but not wallet. (Perhaps on the basis of maximizing the extent to which the meanings of the words in the lexicon allow prediction of what people say.) Of course kids don't pick up the full phone system all at once, but even small ones probably know at least that landline phones have a station that stays in one place, while cells/mobies don't.
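      A toy sketch, in case it helps make the proposal concrete (this is my own hypothetical model, not anything from the NSM literature): treat an explication as a set of components, and let learning add components until the explication separates observed uses of a word from non-uses:

```python
# Hypothetical model of explication-by-refinement; all feature labels invented.
def fits(explication, entity):
    """An entity fits an explication if it shows every listed component."""
    return explication <= entity["features"]

def refine(explication, uses, non_uses, candidate_components):
    """Add components shared by all uses until no non-use fits."""
    for component in candidate_components:
        if all(component in u["features"] for u in uses):
            explication = explication | {component}
        if not any(fits(explication, n) for n in non_uses):
            break
    return explication

wallet = {"features": {"small", "fits-in-hand"}}
cell = {"features": {"small", "fits-in-hand", "people-say-things-with-it"}}

exp = refine(set(), uses=[cell], non_uses=[wallet],
             candidate_components=["small", "fits-in-hand",
                                   "people-say-things-with-it"])
# "people-say-things-with-it" ends up in the explication of CELL, not WALLET
```

      On this toy picture, the child's observation that people do not converse via their wallets is just a non-use that forces the saying-related component into the explication.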

      Darwin's problem: if my speculation that the origin of language is in the organization and parsing of behavior is correct, then the beginnings of the prime set could be as early as primitive reptiles (before the synapsid/diapsid split), since both mammals and birds have a mostly unknown but increasingly impressive degree of ability to both produce complex behaviors and understand what other animals (of the same and different species) are up to. AW has actually been trying to speculate about what the chimpanzee prime set might be; I think this is probably insane to attempt now, but otoh it might be a brilliant first step in figuring out how animals organize their own behavior and understand each other's (AW and NC have a number of properties in common, including a capacity to strike out in directions that strike their colleagues as completely crazy ...)

  4. Margolis and Laurence have written a lot on this recently -- e.g. In Defense of Nativism, from a broadly pro-Chomsky nativist standpoint. But they are very sceptical about Fodorian radical concept nativism. I recommend that paper and their Learning Matters.

    1. Thanks for the link, Alex C. Note that these authors look at a "nativism–empiricism debate" while Norbert tells us [rightly I think] that we are all nativists to some degree and refers to a rationalism - empiricism debate.

      Chomsky certainly claims that poverty of stimulus arguments [we know much more than we could have learned] apply to concepts, even simple ones like RIVER:

      "When I give examples in class like river and run these odd thought experiments [concerning the identities of rivers - what a person is willing to call a river, or the same river that you find in my work], it doesn't matter much which language background anyone comes from, they all recognize it in the same way in fundamental respects. Every infant does. So, somehow, these things are there" [SoL, 27]

      Though he also admits he doesn't have a clue about how 'these things' could have gotten encoded in the genome:

      "But then the question is, where did it come from? You can imagine how a genetic mutation might have given Merge, but how does it give our concept of psychic identity as the defining property of entities?" [SoL, 28]

      So I am very much looking forward to Norbert's demonstration of how that can be done for a few concepts.

    2. These links were just meant to shed some light on the current views about the links between Fodorian Radical Concept Nativism (post LOT2), the argument about decomposability of concepts, and what we call Linguistic Nativism.
      The Royaumont symposium was a long time ago.

  5. After reading through all these comments I can't help but get the sense that nobody other than Norbert has actually read Fodor. I do suggest trying that. Something like his 'Hume Variations', 'Doing Without What's Within: Fiona Cowie's Critique of Nativism', or his 'In Critical Condition'.

    I'm hesitant to respond to points made because the distortions of Fodor's positions (which I'm by no means convinced of wholesale) are enormous.

    Also: what's with this repeated mention that the 70s / the Royaumont symposium was a long time ago? Anybody with a flake of memory and attention can see that the history of the field of cognitive science (in all its pop names and forms) for the past 200 years has been jumping almost aimlessly from trend to trend. Saying something is new and trendy in this corner of the pool should raise flags. Revolutionary, substantive ideas just don't crop up that often.. or get supplanted very quickly.

    As Fodor has been wont to point out, in inquiry into the mind, if you don't like what's going on, wait a while.

    So please: let's let the TV-commercial-sized memory stop being a thing.

    1. LOT2 and Concepts are more directly relevant in my opinion. I mentioned that it is a long time ago as a way of saying that Fodor's views have changed a lot since then. And if Fodor is no longer defending the views, I don't see why I should waste any energy discussing them.

    2. The shorter pieces don't seem to me to present any significantly different story to lexical semanticists than the original 1975 one, so it is not surprising that they continue to suppose that Fodor is insane and get on with their work (the changes in the infrastructure don't alter the take-home message for the lexical semanticist).

      My suggested solution to the Fodor Problem is that word-meanings (what linguists and developmental psycholinguists study) are not concepts, but something similar to concepts and a little bit different, because they only have to apply or not apply to things found in the normal environment of the speakers of the language, rather than having true necessary and sufficient conditions for use.

      So if we're interested in the meaning of the word 'inteltye' in Arrernte (grasshoppers, excluding green ones, which are called something different, along with katydids), we're done when we can distinguish the critters found in Central Australia that are called 'inteltye' from those that aren't. We don't have to worry about insects from the Amazon, and even less about whether convincing grasshopper-like robots or creatures from other planets descended from things that are sort of like plants would be called 'inteltye' or not.

      This makes the problem of defining animal kinds, for example, much easier. So the main characteristics of an inteltye are:

      a. it has a finger-like body
      b. the front part of the body is hard
      c. the back part of the body is soft
      d. it has four short legs for walking, two on either side of the hard part of the body
      e. behind these, on the hard part of the body, it has two long legs for jumping
      f. it can jump a long way
      g. it probably can fly
      h. if it is big, it can give you a painful bite if you put your finger near its mouth
      i. it is not green

      That's almost it; I found one other creature, a bit similar but with a different name, though I don't remember exactly what it looked like. I suspect that the green ones (nwekepeltherre iirc, it's been a long time) have the above characterization plus that they are green, and that the 'not green' is elsewhere-blocking, as is the one I don't remember.

      The terms used in this characterization all have close equivalents in Arrernte, and could plausibly be part of other definitions/explications in a non-circular system resting on the putatively innate primes a.k.a. primitives.
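      The flat characterization above can be read as a recognition predicate over the creatures actually found in the speakers' environment. A minimal sketch, with feature labels invented for illustration:

```python
# Hypothetical encoding of the 'inteltye' characterization; the feature
# names are my own labels, not Arrernte terms.
INTELTYE_FEATURES = {"finger-like-body", "hard-front", "soft-back",
                     "four-walking-legs", "two-jumping-legs", "jumps-far"}

def is_inteltye(creature_features):
    # The word applies iff the creature shows the defining features and
    # is not green ("not green" modeled as a blocking condition).
    return (INTELTYE_FEATURES <= creature_features
            and "green" not in creature_features)

grasshopper = INTELTYE_FEATURES | {"can-fly"}
green_one = INTELTYE_FEATURES | {"green"}   # the green, differently named case
```

      Note that the predicate only has to be right about Central Australian critters; nothing in it settles the Amazonian or robotic cases, which is the point.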

    3. A bit more on Fodor, Mandel & Wierzbicka:

      JF (Niyogi&Snedeker p 10): "You can't observe under a description unless you have the concepts that constitute the description. You can't see things as barking unless you have the concept barking."

      AW (Lexicography and Conceptual Analysis, 1985:169; from the explication/definition of 'dogs', edited a bit by me to replace some Latinate words of the kinds she usually avoids with NSM primes):

      "they make a kind of sound, making a short loud sound many times
      each time opening the mouth wide and closing it rapidly again.
      it sounds as if they wanted everybody to know that they were there."

      So an Agent in possession of an appropriate prime set plus means for composing them can be thought of as having innately an infinite set of concepts, ordered in a complexity lattice, where 'animal' is (much) simpler than 'dog', and 'barking' is part of the dog concept (as well as the 'seal' concept).
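      The lattice idea can be sketched directly (again a toy of my own, with made-up primes): concepts are finite compositions of primes and previously built concepts, and complexity is just the size of a concept's full expansion:

```python
# Toy complexity ordering over composed concepts; the "primes" here are
# invented stand-ins, not the actual NSM prime set.
ANIMAL = frozenset({"kind-of", "living"})
BARKING = frozenset({"sound", "short", "loud", "many-times"})
DOG = frozenset({ANIMAL, BARKING})   # BARKING is a part of the DOG concept

def complexity(concept):
    """Number of prime occurrences in a concept's full expansion."""
    return sum(complexity(c) if isinstance(c, frozenset) else 1
               for c in concept)
# complexity(ANIMAL) = 2 < complexity(DOG) = 6: ANIMAL is simpler than DOG
```

      The set of such compositions is infinite, and comparing expansions gives one way of partially ordering it by complexity.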

      But, clearly, 'having' a concept in this sense is very different from having picked it out as something worth attending to and connecting it to a word. But hypothesis development and testing isn't clearly anything different from Perceptual Meaning Analysis. Perhaps what she means is that the development is almost all progressive refinement with very little backtracking or sidewise movement (e.g. "oops, I got cats and bunnies reversed, need to swap their descriptions!")
