Tuesday, February 26, 2013

Fodor on Concepts Again


There have been two kinds of objections to Fodor’s argument in the thread of ‘Fodor on Concepts.’ Neither of them really engages with his arguments. Let me quickly review them.

The first comes in two flavors. The first flavor is that if Fodor is right, this implies that the concept ‘carburetor’ is innate, and this is batty. The second is a variant of this, but invokes Darwin and the holy spirit of biological plausibility to make the same point: that putting ‘carburetor’ concepts into brains is nuts.

Let’s say we agree with this. This implies that there is something wrong with Fodor’s argument. After all, it has led to an unwanted conclusion. If so, what is wrong? Treat Fodor as the new Zeno and let’s clarify the roots of the paradox. This would be valuable and we could all learn something. After all, the argument rests on two simple premises. The first is that if learning is a species of induction, then there must be a given hypothesis space, and this hypothesis space cannot itself be learned, for it is a precondition of induction; hence it is not learned, viz. it is innate. The second premise is that there are a lot more primitive concepts in the hypothesis space than one might have a priori imagined. In particular, if you assume that words denote concepts, then given the absence of decomposition for most words, there are at least as many concepts as words. Let’s consider these premises again.

The first is virtually apodictic. Why? Because all inductive theories to date have the following form: of the alternatives a1…an, choose the ai that best fits the data. The alternatives are given; the data just prune them down to the ones that are good matches with the input. If this is so, Fodor notes, then in the context of concept acquisition this means that a1…an are innate, in the sense that they are not there as a result of inductive processes but are available so that induction can occur. As I said, this is not really debatable unless you come up with a novel conceptualization of induction.
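To make the shape of the point concrete, here is a toy sketch (the hypothesis space, the data, and the scoring function are all invented for illustration; nothing hangs on them). Whatever the learner returns was sitting in the hypothesis space from the start; the data only select among the givens.

```python
def induce(H, data, fit):
    """Choose the hypothesis in H that best fits the data. Note that the
    output is always an element of the pre-given space H."""
    return max(H, key=lambda h: fit(h, data))

# Toy example (hypotheses, data, and scoring are all made up): candidate
# extensions for 'red', and observations pairing hues with whether
# speakers called them 'red'.
H = [
    {'scarlet'},
    {'scarlet', 'crimson'},
    {'scarlet', 'crimson', 'pink', 'orange'},
]
data = [('scarlet', True), ('crimson', True), ('pink', False)]
fit = lambda h, obs: sum((hue in h) == called_red for hue, called_red in obs)

print(induce(H, data, fit))   # -> {'scarlet', 'crimson'} (printed order may vary); it was in H all along
```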

The second premise is empirical. It is logically possible that most of our concepts are complex combinations of simple ones, i.e. that most concepts are defined in terms of more primitive ones. Were this so, then concept acquisition would be definition formation. Fodor has argued at great length that this is empirically false, at least if you take words to denote concepts. English words do not by and large resolve into definitions based on simpler concepts. Again, this conclusion is not that surprising given the work of Austin and the other ordinary language philosophers. They spent years showing that no two words mean the same thing. The failure of the Fodor-Katz theory of semantic markers pointed to the same conclusion, as did the failure of cluster concepts to offer any enlightenment into word/concept meaning. If most words are definitions based on simpler concepts, nobody has really shown what those definitions are. Note that this does not mean that concepts fail to interrelate. It is consistent with this view that there are scads of implicational relations between concepts. Fodor is happy with meaning postulates, but they won’t suffice. We need definitions, for only in this way can we get rid of what I would dub the “grandmother problem.” What is that?

How are you able to recognize your grandmother? One popular neuroscience theory is that your grandmother neuron lights up. Every “concept” has its own dedicated neuron. This would be false, however, if the concept of your grandmother were defined via other concepts. There wouldn’t have to be dedicated grandmother neurons, for the concept ‘grandmother’ would be entirely reducible to the combination of other concepts. However, this is only true if the concept is entirely reducible to the other primitive concepts, and only a definition achieves this. So, either most concepts are definable or we must accept that the set of basic concepts is at least as large as any given lexicon, i.e. the concept for ‘carburetor’ is part of the innate hypothesis space.

I sympathize with those who find this conclusion counter-intuitive. However, I have long had problems getting around the argument. The second premise is clearly the weaker link. However, given that we know how to show it to be false, viz. provide a bunch of definitions for a reasonable subset of words, and the fact that this has proven pretty hard to do, it is hard to avoid the conclusion that Fodor is onto something.

Jan Koster has suggested a second way around Fodor’s argument, but it is not one that I understand very well. He suggests that the hypothesis space is itself context sensitive, allowing it to be sensitive to environmental input. Here are two (perhaps confused) reactions: (i) in any given context, the space is fixed and so we reduce to Fodor’s original case. I assume that we don’t fix the values of these contextual indices inductively. Rather, there is a given set of context parameters which, when fixed by context, specify non-parametric values. Fixing these parameters contextually is itself brute causal, not inductive. If this is so, I don’t see how Jan’s proposal addresses Fodor’s argument. (ii) As Alex Drummond (p.c.) observed: “It sure seems like we don’t want context to have too much of an influence on the hypothesis space, because it would make learning via hypothesis testing a bit tricky if you couldn't test the same hypothesis at different times in different situations.” He is right. Too much context sensitivity and you could never step into the same conceptual stream twice. Not a good thing if you are trying to acquire novel concepts via different environmental exposures.

Fodor has a pretty argument. If it’s false, it’s not trivially false. That’s what makes it interesting, very interesting. Your job, Mr. Hunt, should you decide to accept it, is to show where it derails.

52 comments:

  1. Grammars also all leak, and might well do so for the same reasons as definitions/decompositions - they're complicated, functioning in an environment we don't understand, and we don't have quite the right inventory of pieces and composition techniques to put them together. The conclusion that all (language-particular) grammars are 'innate' in Fodorian terms seems to me to be about as well supported as the conclusion that all word-meanings are.

  2. One way to meet Norbert's challenge is to plagiarize Rob Stainton [and 2 of his students]:

    The final objection to Katz takes the form of a differential certainty argument. This is a kind of argument where one must choose between two propositions, typically one supported by abstruse philosophical reasoning, and one that seems immediately obvious. The strategy is to say, “Though we aren’t sure what it is, there must be something wrong with the abstruse argument, because what it seemingly supports is far less plausible than other things we know”. (An example: following Zeno, one might argue that because any distance can be divided in half, one cannot really walk across a room. Long before knowing what was wrong with this line of thought, a differential certainty argument can show that there has to be something wrong — for people cross rooms all the time.) [Iten et al., 2007, p. 237]

    So if we accept this then we should accept: Fodor's abstruse reasoning must be wrong even though we do not know what is wrong with it. q.e.d.
    [actually i think we can do better than this but this is one way to go]

  3. This comment has been removed by the author.

  4. I am normally happy to just sit back and learn from these exchanges, but now seems like an opportune time to chime in. First, Fodor usually takes as common ground among his critics that “Minds like ours start out with an innate inventory of concepts,” of which he adds “there are more than none but not more than finitely many” (2008: 131). Yet whatever this stock of pre-determined concepts happens to be, nobody in their right mind, including Fodor as I understand him, thinks that DOORKNOB and CARBURETOR are among them *if* “innateness” is taken to mean something like: programmed into a thinker’s genetic code and thus either (i) present at birth, or (ii) acquired some time later as a natural consequence of neurological/ontogenetic development.

    Notice that Fodor is not just playing games with the meaning of “innateness.” For there is something “genuinely” innate about concept acquisition, which is a cognitive mechanism that has evolved in creatures like us to enable minds so-constituted to acquire (as opposed to learn) concepts by becoming metaphysically locked to their referents. Precisely how this locking mechanism works, as Fodor readily admits, is anyone’s guess. Rather, he is merely relying on an inference to the best explanation which runs something like this: In order to satisfy the explanatory demands of traditional belief-desire psychology, the referential relations between concepts and their contents must be subsumable under broad psychological laws. Fodor’s wager is that these laws are embodied by an innate mechanism that facilitates concept acquisition.

    More specifically, Fodor (1998) maintains that there are basically two kinds of properties that minds like ours can become nomically locked to, corresponding to two ways of being locked to them. There are what he calls mind-dependent properties and those that are mind-independent. As the name suggests, mind-dependent properties, or MDPs for short, depend for their existence on the psychological laws that govern how things that have them appear to us. Or stronger still, as Fodor puts it, MDPs are constituted by psychological laws that govern the way their instances “strike our kind of minds as being.” Paradigm examples of MDPs are sensory properties such as being red, which he claims are “constituted by the sensory states that things that have it evoke in us.” In turn, it is visual experiences of things “as-of” being red that causally mediate the acquisition of the sensory concept RED. To say that concept acquisition is causally mediated by experience is again not to say that concepts are learned by statistical abstraction over these experiences. Rather, Fodor likens concept acquisition to contracting the flu—it is something that just happens to us in consequence of having the right kinds of experiences.

    Sensory properties are members of a more general category of MDPs called appearance properties which we get locked to by way of appearance concepts including our old friends DOORKNOB and CARBURETOR. Like sensory properties, appearance properties are constituted by a psychological law that governs how “we lock to them in consequence of certain sorts of experience.” To quote Fodor at length (1998: 142):

    “The model, to repeat, is being red: all that’s required for us to get locked to redness is that red things should reliably seem to us as they do, in fact, reliably seem to the visually unimpaired. Correspondingly, all that needs to be innate for RED to be acquired is whatever the mechanisms are that determine that red things strike us as they do; which is to say that all that needs to be innate is the sensorium. Ditto, mutatis mutandis, for DOORKNOB if being a doorknob is like being red: what has to be innately given to get us locked to doorknobhood is whatever mechanisms are required for doorknobs to come to strike us as such.”

    I take it that the mechanism referred to here has something to do with what others here are calling the "innate hypothesis space,” or perhaps what Fodor (2008) refers to as “an attractor landscape.”

  5. I agree that the Stainton refutation only works if you accept the premise [that the argument is based on abstruse reasoning]. I don't for Katz, so I was as unconvinced as you are for Fodor. That's why I said we can do better.

    I think Fodor went wrong when he posed a false dilemma: it has to be either Quine's view or his. Quine's view has obvious problems [many here have pointed them out], therefore Fodor must be right. If Quine's and Fodor's were the only options, that would follow. But he has not ruled out other possibilities. Maybe Avery's view is a sophisticated version of Quine - you'd have to ask him. But I [and seemingly Jan Koster as well] think there are other possibilities which Fodor has not refuted...

  6. A problem I'm having with this discussion is that Norbert has twice presented his point 1, point 2 sequence, where I happily agree with Norbert's Fodor on point 1, but not on point 2; then the discussion goes off in directions that seem unrelated to point 2, perhaps due to some crucial bit of ignorance on my part.

    But anyway, for colors, it's not enough to know that some red thing is called 'red', you also have to learn that pink and orange things are not 'red' in English (but do fall under the red-like concept in other languages), which can be done without either explicit negative evidence (maybe this should be called 'corrective evidence') or innate color concepts such as RED, if you have an appropriately structured property space & a Bayesian learner that notes which things are and are not called 'red' (the latter including pink ribbons and ripe oranges, but not ripe Jonathan apples).
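    To make "appropriately structured property space & a Bayesian learner" a little less hand-wavy, here is a toy sketch of the sort of thing I have in mind (a one-dimensional hue space, invented numbers, and a size-principle likelihood); real models are of course far more sophisticated, and nothing hangs on the details.

```python
import itertools

HUES = list(range(0, 40))              # crude stand-in for a structured hue space
H = [set(range(lo, hi + 1))            # every hue interval is a candidate extension of 'red'
     for lo, hi in itertools.combinations(HUES, 2)]

called_red = [3, 5, 8]                 # hues of things heard being called 'red'
called_other = [15, 30]                # hues of things called 'pink', 'orange', etc.

def posterior_score(h):
    # Zero if an exemplar of 'red' falls outside h, or if something called by
    # another word falls inside it (so uses of 'pink' double as indirect,
    # not explicit, negative evidence); otherwise the size principle:
    # narrower extensions are likelier.
    if any(x not in h for x in called_red) or any(x in h for x in called_other):
        return 0.0
    return (1.0 / len(h)) ** len(called_red)    # flat prior omitted (a constant)

best = max(H, key=posterior_score)
print(min(best), max(best))            # -> 3 8: the tightest interval around the 'red' data
```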

    Perhaps Fodor would not have a problem with that, but he presumably would with somebody proposing to define 'persuade' along the lines of (Wierzbickian-style format, not because I believe in her anti-formalism, but because it's a good way to convey the sense of a proposed decomposition without requiring people to master any particular formalism, tho I've added indexes to nail a few things down):

    a) [X said something]_i
    b) Y didn't believe Z before this_i
    c) after this_i, because of this_i, Y did believe Z

    (a-c) seem to me to all be entailments of 'X persuaded Y that Z', where X is a person, such as Jerry Fodor (books and aspects of people's behavior are different), but the attempted decomposition can be found deficient because 'persuade' applies to a narrower range of situations than the decomposition.

    So by the Fodorian view as I understand it, in accord with Norbert's point 2, the reason for this failure is that correct decompositions are nonexistent; this means that no matter how many additional clauses we add to our decomposition, either some clause will rule out some situation that 'persuade' covers, or 'persuade' will rule out some situation that the current collection of clauses covers.

    This may be, but it seems to me to be more constructive (in the everyday sense of the term) not to just assume that this is true, but to try to find what could be added to (a-c) to make it work properly. Especially because there appear to be many subtle differences between languages in what their terms for persuasion-like activities actually mean.

    That this is hard cannot be doubted: nobody has provided a list of correct concepts to use as primitives, and you have to look at substantial areas of vocab in multiple languages, including exotic ones where you don't know the culture very well, to make plausible hypotheses. But it doesn't seem to me to be at bottom any more dubious than the program of generative grammar.

    Replies
    1. I’m sorry, I didn’t quite catch your point(s) except that it’s related to pt 2. So I’ll try to recap what I’ve got from these two lessons. It says that *if* learning is inductive, there must be some hypothesis space given beforehand. Unfortunately we don’t know what it looks like. Now for the sake of argument let’s assume that learning *is* inductive and there is an innate concept *red*. As far as I understand Jan Koster’s *interpretations* (see his comments to Fodor on Concepts, part 1), they are not necessarily compositions of the (basic) concepts, and if so the combinations may be somewhat arbitrary and unimportant. For example, (the) R/red(s) can mean ‘wine’, ‘dye’, ‘Cincinnati Reds’, ‘Manchester United’, ‘Russian bolsheviks’ and more. In this sense he rightly assumes, at the social/cultural level, a constrained but virtually open hypothesis space which, however, is built above the constrained and closed space at the cognitive level.

      Though I don’t quite agree with Jan Koster in some aspects of his theory, I take it as a reasonable way to account for the apparent paradox of the hypothesis space.

    2. I vaguely remember from my undergraduate Fodorian indoctrination that there are some positive empirical reasons to reject the Wierzbickian analysis. First, kids appear to acquire mid-level concepts before they acquire the primitive concepts which these are allegedly constructed from. Second, there is very little evidence for the existence of definitions from real-time processing. E.g., people find sentences with negation hard to process, but they don't have any trouble with concepts whose definitions would presumably involve negation (DENY, REJECT, etc.) So Norbert may have been understating the case a bit. Even if we ignore the fact that the definitions "leak", other problems remain. In the case of natural kind concepts, there are also the Kripke/Putnam arguments against definitionist analyses.

      But I think there's agreement here on the strategic picture. If you genuinely think that definitionism has a hope, then you don't have much to fear from Fodor's argument. At worst you'd have to accept that what you formerly thought was concept acquisition was really just a process of fixing beliefs regarding what certain words apply to.

  7. @Alex: The processing arguments in the original 'Against Definitions' paper were effectively destroyed by the invention of the lambda calculus before they were ever made in the first place (Fodor appears to have been assuming some sort of Generative Semantics-style implementation), as Jackendoff effectively notes in Semantics and Cognition (not explicitly mentioning the lambda calculus, but it's a known workable 'packaging technique'). Perhaps there are better ones now, or some effective refutation of J's arguments? Acquisition is trickier, but children certainly master some of the primes, such as NO and WANT (and especially their combination!) from an extremely early age, and act as if they knew many of the others, regardless of what kinds of word meanings they seem to know.

    The point of my post however is not really to claim that decomposition is workable (however fun it may be to try to do it), but that it's really no more dubious than generative grammar. And the Fodorian position is deeply uninteresting, at least for empirically minded people (perhaps there's more in it for philosophers), amounting to little more than the observation that there are problems, and that it might not work, which sensible people have always known anyway (for both projects).

    Replies
    1. Yes, it's possible to reconcile decomposition with the available evidence. Kids have FURNITURE before they have TABLE, but they're careful not to show it. The denotations of words are pre-compiled "chunks" of semantic information and therefore easier to process because — I've always had trouble filling in the blank here, and the lambda calculus doesn't help. All Jackendoff offers is a vague analogy with the motor systems (p. 125 of Semantics and Cognition).

      The evidence can be reconciled with decomposition at a stretch, but it seems to favor other models of conceptual semantics. Something for the empirically-minded to consider.

  8. NSM provides no reason to think anybody should have furniture before table; their natures are rather different ('furniture' is a collection of different kinds of things, made by people, in one place; 'tables' are a kind of thing made by people, with various further elaborations). Tables furthermore have a characteristic shape (you can draw a generic table) that a Marr-style shape definition could accommodate, while furniture does not, so it's not surprising that children would learn it later. This is from Lexicography & Conceptual Analysis (1985); Cliff Goddard has a forthcoming paper about 'functional categories' such as weapons and furniture, and there is probably some stuff in between.

    The point about the lambda calculus is that if 'kill' is '\yx.Cause(x, Die(y))' then the structure is stuck inside a package that can be manipulated exactly like an indecomposable transitive for syntax, neutralizing Fodor's original arguments. Perhaps there are better ones now? Appealing to meaning postulates won't help: there's no reason to think that finding and looking up the meaning postulates triggered by 'kill' would be any quicker than looking inside the lambda-term when you need to extract the entailments. Also notice that when processing with defined terms, you don't have to make copies of them, but can manipulate pointers to them (until you need to look inside, at which point managing meaning postulates leads to the same kinds of problem, instantiating the postulates with references to the currently relevant participants).
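    Just to show the packaging trick concretely, here is a toy sketch (mine, not anybody's worked-out semantics): the decomposed meaning sits inside a closure that composes exactly like an unanalyzed transitive.

```python
# Primitive predicates over a toy model (names invented for illustration).
def DIE(y):
    return ('DIE', y)

def CAUSE(x, event):
    return ('CAUSE', x, event)

# 'kill' as \y.\x. CAUSE(x, DIE(y)): the internal structure is hidden inside
# the lambda package, so syntax can pass it around like any transitive verb.
kill = lambda y: lambda x: CAUSE(x, DIE(y))

# An unanalyzed transitive has the same type (a curried two-place function),
# so composition cannot tell the two apart until you look inside by applying them.
see = lambda y: lambda x: ('SEE', x, y)

print(kill('Mary')('John'))   # ('CAUSE', 'John', ('DIE', 'Mary'))
print(see('Mary')('John'))    # ('SEE', 'John', 'Mary')
```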

  9. I may not be tracking precisely with what’s at issue here. But here is my two-cent take on Fodor's take on definitions. The English word ‘water’ can be used to express the basic/atomic concept WATER, whose representational content (ex hypothesi) is the semantically primitive property of being water. Chemists, however, sometimes use the word ‘water’ to express the structurally complex concept [HYDROGEN & DIOXIDE], whose content is presumably the complex property of being [hydrogen + dioxide] (where the ‘+’ here indicates set intersection; Fodor seemingly takes properties to be sets rather than universals). As it turns out, while the concepts WATER and [HYDROGEN & DIOXIDE] are not content-identical, they are nevertheless coextensive. And as a complex concept, [HYDROGEN & DIOXIDE] has a decompositional analysis and supports certain logical entailments (by way of conjunction elimination). In the reverse direction, we can say that “learning” [HYDROGEN & DIOXIDE] involves a process of concept conjunction.

    So, if you want to say that ‘water’ is definable as ‘hydrogen dioxide’, I think Fodor would say: “You can say that, if you like.” But as Norbert points out, Fodor might prefer to call this a meaning postulate. In any case, Fodor’s view (strictly speaking) seems to be that for any primitive lexical item (“words,” roughly speaking), there is a corresponding “word-sized” primitive concept that can be associated with its meaning. Yet again this does not preclude the possibility that at some level of semantic description a particular speaker might associate a particular word-sound with a complex concept (say as a part of “belief formation”). To take a different example, autobiographically speaking (and after years of indoctrination) I associate ‘bachelor’ with the complex concept [ELIGIBLE & UNMARRIED & MALE], whose content is [eligible + unmarried + male], and where ELIGIBLE, and UNMARRIED, and MALE each presumably have their own analyses (in the sense indicated above).
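    (To fix ideas, here is a toy rendering of the conjunction picture just sketched, with invented extensions; the only point is that the complex concept's content is the intersection of its conjuncts' contents, and that conjunction elimination falls out as a subset relation.)

```python
# Invented extensions, purely for illustration.
ELIGIBLE  = {'Al', 'Bo', 'Cy'}
UNMARRIED = {'Bo', 'Cy', 'Di'}
MALE      = {'Al', 'Bo', 'Cy', 'Ed'}

def conjoin(*concepts):
    """Content of the complex concept [C1 & C2 & ...] as the intersection
    of the conjuncts' contents."""
    return set.intersection(*concepts)

# The *complex* concept's content (kept distinct from the primitive BACHELOR).
BACHELOR_COMPLEX = conjoin(ELIGIBLE, UNMARRIED, MALE)

# Conjunction elimination: whatever falls under the complex concept falls
# under each conjunct.
assert all(BACHELOR_COMPLEX <= c for c in (ELIGIBLE, UNMARRIED, MALE))
print(BACHELOR_COMPLEX)   # {'Bo', 'Cy'} (printed order may vary)
```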

    Notice: this is not to say that ‘bachelor’ *means* eligible unmarried male. Rather, if Fodor is right, ‘bachelor’ means bachelor, punkt! And this is to say that anyone who knows what the word ‘bachelor’ means, as such, is in possession of the primitive concept BACHELOR (where having BACHELOR does not entail having [ELIGIBLE & UNMARRIED & MALE]). Here again, different concepts, different contents; while coextensive (by assumption), one is both structurally and semantically complex whereas the other is syntactically simple and semantically primitive. This proposal generalizes for any putative definition.

  10. There are at least two problems with definitions and the various forms of decomposition. The first problem, as discussed by Fodor and in the current thread, is that definitions are almost never exhaustive or quite right. A second problem is that it is hard, if not impossible, to give examples of true analyticity. Take the English word "pork", famously analyzed by Postal as MEAT FROM PIGS. This has always intrigued me because "pork" is one of the few words whose learning I actually remember from childhood. The Dutch word is "varkensvlees", which is transparent as to the animal involved ("varken" = "pig"). When I was a child we had cans with "pork" imported from England and I picked up that word but didn't have the faintest idea that the meat was from pigs. It would be silly to say I didn't know the meaning of "pork" because I knew what it looked like, how it tasted and that it was meat. Only the pig was missing from the picture. So, MEAT FROM PIGS represents contingent knowledge, not something truly analytic.

    Chomsky has suggested that true analyticity is easier to find with verbs than with nouns (like "pork"), but I doubt it. Take the famous analysis under which "John killed Mary" entails "John CAUSED Mary TO DIE." Once more, the entailment is not a matter of necessity but dependent on one of the many theories of causation. Thus, according to certain Islamic thinkers, followed by some European nominalists in the Middle Ages, only God causes anything directly, which would make "John" a mere instrument in our example (see the interesting article "Occasionalism" on Wikipedia). I conclude from all of this that if you give a definition/paraphrase/decomposition, you literally don't know what you get, because the more primitive categories must be interpreted as well, which involves an open-ended set of background theories.

    In the meantime, I still don't see how the fact that inductive learning requires a pre-existing hypothesis space (something I agree with) forces us to jump to the further conclusion that this hypothesis space is populated by innate concepts. *Prima facie* it doesn't even apply to Fodor's original example ("miv" learned to be "red and round"). I must be missing something because, obviously, "red" is not innate but conventional, being based on a culturally sanctioned selection from the color spectrum. Perceptual options for the latter may be hard-wired, but selection leads away from innateness, almost by definition. For similar reasons, we don't say that the OV order of some language is innate, even if there is an innate hypothesis space that allows only two parameter settings, viz. OV and VO.

    Replies
    1. Yes, I think this is one of the flaws in the argument: the step from saying that the hypothesis space {h1,h2, ...} is innate to claiming that all of the individual hypotheses are innate.

      This only makes sense if you are thinking in a P & P style way where you only have a small finite number of hypotheses, but not when the space is very general and infinite.

      To give an example from another domain that may help the intuition -- the Red Book standard for audio CDs gives the exact specification for a valid audio CD. But we wouldn't say that the Red Book contains my favourite ABBA compilation. So one can know the contents of the Red Book without knowing anything about ABBA.

    2. I think [but could be mistaken] that one of Fodor's mistakes is to assume that learning/using language [LL] is the same [just maybe on a lesser level of sophistication] as philosophical analysis [PA] of language/concepts. So if decomposition/definition cannot work 'all the way' for the latter [which might be true] it can't work for the former [which seems false].

      If learning language really is the kind of beast Chomsky suggests, then Fodor's assumption would be plausible [but maybe not necessary?]. Given that people who work on language acquisition [and all kinds of examples like Jan and Alex mention above] have provided some evidence to suggest that the LL and PA might be rather different in nature, it seems that the ball IS in Fodor's court and he needs to show that we're doing [necessarily] the same [kind of thing] when we learn language as we do when we do deep philosophical concept analysis...

    3. Fodor might be able to do us a favor by rewriting his material in the light of recent work on color terms by people like Mike Dowman, who seems to get interesting results by combining an innately structured perceptual space with a Bayesian learner, but no innate population of color concepts other than the four focal points, which are not always represented in vocabulary, but do seem relevant to behavior.

    4. @Avery. I think this would be a nice example to illustrate Fodor's point. The Bayesian process can be understood as a process of belief fixation regarding which range in the color space 'red' applies to. Clearly, the concept SUCH-AND-SUCH-A-RANGE-IN-THE-COLOR-SPACE had to have been available all along, or the correct hypothesis could never have been formulated.

      At this point Fodor's opponent may say, "ok, sure, call it belief fixation rather than concept acquisition if you like". This would be to concede Fodor's logical point while maintaining that ‘red’ can be decomposed. But then we have the familiar problem that it's deeply implausible to consider a mathematical statement of a range in the color space to be a definition of the word ‘red’. That might be what it is to be red, but it's not what it means to be ‘red’. Fodor's analysis captures this perfectly. Say that Dowman is 100% correct about the math involved (Fodor can happily concede this, I think). Then kids become locked to the mind-dependent property of redness via some Bayesian process, such that a reliable causal link is established between the property of redness and tokenings of RED. The cute thing about this analysis is that all of the fancy math can remain entirely subconceptual. On Fodor's account we're not forced to make the absurd assumption that kids acquire ATTRACTOR LANDSCAPE before they acquire RED. It's not so clear how to avoid this absurdity on the definitionist view, since the purported definition of RED/‘red’ makes reference to fancy concepts like this.

      (In principle, it's compatible with Fodor's view that kids lock on to the property of redness via fancy concepts like ATTRACTOR LANDSCAPE. That would make RED rather like a natural kind concept, where a property is locked on to via a complex theory.)

    5. I dislike the term 'belief fixation' because it suggests that something permanent happens; once you have fixated the belief 'red' you're stuck with it for the rest of your life. But this is clearly NOT how we learn concepts. We learn bits and pieces. We make mistakes. Developmental psychologists tell us mistakes and correction [direct and indirect negative feedback] are much more common than we used to think.

      Take Jan Koster's example; he had incomplete knowledge of 'pork'. Now there is a philosophically interesting debate about whether he knew the meaning of 'pork' at all before he knew it was 'meat from pigs'. But there's no language acquisition mystery: one fine day Jan learned that, in addition to what he already knew, this stuff in the cans was 'meat from pigs', and he 'updated' his belief. Do we have reason to believe this is not how kids first learn? Most of first language acquisition happens so early that we forget details. Sometimes parents remember an episode [my son used to insist 'Sally the camel has 5 house' because he knew what a house is but not what humps are. Took some effort and camel pictures to teach him]. And anyone who switched to a second language later in life has stories like Jan's.

      I think the absurdities Alex D. mentions arise only if we insist acquisition/concept fixation has to happen 'all in the brain'. Again, the alternative is not "It's all from the input". It may be helpful to focus on one of the questions Chomsky posed as important for research but which are often 'put aside' in these debates:

      (1) What is the knowledge or faculty of language?
      (2) How did this knowledge or faculty develop in the individual?
      (3) How is that knowledge put to use?
      (4) How is it implemented in the brain?
      (5) How did that knowledge emerge in the species? [Boeckx, 2009, p. 44 - I copied these because i have the file on my computer, they nicely summarize Chomsky's]

      There seems to be agreement among Chomskyans that we have answers [at least in principle, though not in detail] for 1 and 2 but that 3 [sometimes dubbed 'Descartes' problem'] is mostly a mystery. This may be true if we ask for a full account of creativity. But this does not mean we know nothing about how people put knowledge of language to use. IMHO it would be beneficial to look closely at early acquisition studies because [among other things] they show that how 'the knowledge is put to use' changes over time. Possibly shifting the focus away from 1 and 2 can 'dissolve' the seeming paradox [Fodor's view seems to have absurd consequences; the only opposing view seems to rest on absurd assumptions] ...

    6. @Alex: so if Fodor found that to be an OK summary of his views, I won't quarrel with him about colors, in any deep way (but how is this different from anybody else's, except possibly Wierzbicka's, for colors?), and a similar story is probably workable for at least some other sensory properties.

      So one of the things that comes up next is what to do about more 'conceptual' words such as 'think', 'believe', 'persuade', which turn out to differ in many subtle ways from one language to the next; do we want to try to have some story about them, or just give up? In the former case, the Wierzbickians are at the very least collecting lots of relevant data; who else is doing even that?

    7. [continuation of previous] about that corner of the vocab, and many others besides; other people such as Jackendoff have done and are doing useful things elsewhere, but the total mass of stuff collected by the Wierzbickians is getting impressive, in spite of all kinds of issues sensible people have with various aspects of the method.

    8. @Avery I agree that Fodor's story is not that different from anyone else's story for colors. However, that's to be expected, because as Fodor said in the thing I quoted a while back, he's basically extending the empiricist story about sensory concepts to concepts in general (slogan: Quine minus the empiricism).

      I also agree that if word meanings don't decompose, that leaves us with the thorny question of what lexical semantics is about. To state the obvious, a lot will hang on whether it's possible to construe the decompositions people have come up with as something other than definitions.

  11. There is no one-to-one correspondence between words in use and basic concepts. We can learn a bit from emerging sign languages. The signers might well have begun with gestures, e.g. a downward spiral hand movement along a slope meaning “(it’s) rolling-down (the hill)”, which later became grammaticalized, e.g. as a hand movement in a circle (“to roll”) followed by a straight downward move (“down”). Of course, such changes in the lexicon leave the concepts intact.

    *Unmarried* is in no case an elementary concept. Consider the etymology of the word bachelor. It probably used to mean something like a disciple and servant of an experienced man, and only later obtained the sense 'unmarried man'. *Red*, unlike *unmarried*, can be a basic concept, at least in principle.

    If we are to look for the basic concepts in the current lexicons we should perhaps search among the more abstract words rather than the most common and apparently simple ones. Now I’ll be just shooting. Consider visual objects. Children below 2 years have the concepts of circle and line; they can even draw them. Why would they need them if not for visual object classification and recognition? Objects can be decomposed into structure primitives, perhaps such as line, arc, oval, symmetry, branching, distinction, movement, bounded entity, perhaps living (or having unpredicted behavior) etc. Visual recognition proceeds from simple features to a complex structure. And it’s all done, all the computations are finished, before the whole comes to our conscious mind. (This brings us close to Quine's translation problem with the rabbit: what part of it does the word denote?) What we are interested in, have to respond to, can give a name to is the emergent whole in the first place. Once more, it’s just shooting but it’s in accord with what Alex Drummond said above: less basic concepts come first (definitions are implicit in the recognition process) while the basic ones need a lot of analytic work. At the time a child learns the word *grandmother*, s/he is entirely unable to analyse it into primitives. A city person in fact may never find out that pork is somehow related to pigs.

    Replies
    1. I happen to agree that there is no one-to-one correspondence between words and basic concepts. But Fodor, as I read him, would not, or at least not with respect to their meanings. And I’m not sure that appealing to etymology in defense of definitions constitutes an objection to Fodor (not terribly unlike arguing that concepts/words have definitions because there are dictionaries). I also agree (as would Fodor) that UNMARRIED is not primitive, and I indicated as much above. Moreover, it may be a brute fact that our concepts of, say, Spelke objects, and concepts of their attendant basic properties, are sui generis, and perhaps innate (in some robust sense of “innateness”). But this doesn’t show that the primitive concept SPELKE-OBJECT has a definition, or that it is metaphysically decomposable into structural primitives. It may be that object recognition involves building up complex representations from simpler ones. But this doesn’t constitute an argument for definitions. As Fodor argues (to my mind convincingly), concept acquisition/possession does not depend on one’s ability to reliably recognize/sort its instances, as such.

    2. My goal was not to argue with you but rather to compare *unmarried* and *red* as conceivable candidates for innate concepts. I agree with what you say; if I could, I’d erase that paragraph of mine.

      Maybe the rest is silly as well. I find Koster’s arguments convincing. Yet there has to be something in the (innate) hypothesis space in the brain that our concepts are built upon. That is why I tried to reconcile Koster’s and Fodor’s theories by assuming that the *basic concepts* are somewhere between the primitive features (which are innate, biological) and what words are about (which is social/cultural). If it can’t work, I'm at my wit's end.

    3. I have no particular quarrel either, nor even a vested interest in Fodor’s account being right. It’s just that I occasionally hear others say things that seem to contradict what I read. It is of course entirely possible that my reading of Fodor is skewed, in which case I’d be much obliged if someone would step in and straighten me out.

      For the record, I also misspoke in an earlier post. Rather than a meaning postulate, I think Fodor would classify “water = hydrogen dioxide” (or ‘water’ = [HYDROGEN & DIOXIDE]) as a *scientific theory* of water. Unlike BACHELOR, DOORKNOB, RED, etc., what he calls the “real” concept WATER expresses a natural kind (a mind-independent property) which is acquired in virtue of having a true theory of its chemical essence. Yet it’s again important to note that WATER is not *constituted by* the theory that facilitates its acquisition (i.e., that while [HYDROGEN & DIOXIDE] and WATER may be coextensive, they are not content-identical). On the other hand, Fodor sometimes speaks of BACHELOR as though it may be one of those rare concepts that actually does have a definition (though I don’t understand why he would want/need to concede even this much).

    4. This comment has been removed by the author.

    5. To be precise, a water molecule is H2O (an oxygen atom and two hydrogen atoms), while “hydrogen dioxide” would be HO2. But it's irrelevant. I quite agree with you that what matters is the bulk water (which in fact contains small quantities of other chemicals as well as microorganisms). Moreover, the concept is fuzzy - does it include ice, snow or water vapor? Sometimes. Sea water? Often. Btw, is 'fuzziness' another innate concept?

    6. Oops! I wonder if this means that I am not properly locked to H2O by way of a true scientific theory :).

      About the other bit, Fodor says that we can get locked to water in at least two different ways. One is by having a true theory of what water is, chemically speaking. On the other hand, our ordinary/pre-theoretical notion of water corresponds to what Fodor would classify as an appearance concept, which we acquire in virtue of the way representative samples of water "strike our kinds of minds as being." The latter would be compatible, I believe, with having a concept of water, call it WATER-PT (for pre-theoretical), that ranges over water samples containing impurities, or that occur in nature in alternate forms.

  12. The question whether to subsume "ice" under the concept "water" and the question whether to include H2O both illustrate the traditional idea of philologists (that I have adopted so far) that word meaning is contextual. "Context" does not only include the immediate context of use but also the background theories that are part of the interpretation process. Like in the "pork" case discussed above, it appears that background theories differ from person to person and from era to era (e.g., water pre- and post- its analysis as H2O). We can communicate not because we know THE meaning of a word but because our interpretations and background theories overlap. Given the flexibility of symbolization, it cannot be said once and for all what concept a word denotes (and/or labels). Given the fundamental fact that word meaning is in flux, depending on ever changing historical/cultural contingencies, it is a mystery to me how anybody can think that word meaning is a matter of innateness, individual psychology or biology.

    People who agree with me at this point might think that nevertheless our culturally determined concepts are based on "deeper" primitives, properly called "concepts", that are innate (or in some Platonic heaven) after all. It seems to me that that is a metaphysical position that cannot be confirmed or disconfirmed by empirical means.

    Note that such primitives would not be attached to words or any kind of physical sign, but be *freischwebend* in some sense. I tend to think that there are no physical or biological counterparts to such free concepts and that concepts only exist in the empirical world as context-bound interpretations of physically realized signs (in the Saussurian sense). I have always found it suspect that if you ask somebody to give an example of a concept you usually get a "visible" expression, often in the guise of an ordinary word spelled with capital letters.

    If this is true, externalization is not some kind of afterthought but part and parcel of the fabric of language (undermining the concept of I-language, at least for concepts). There are perhaps some exceptional situations in which concepts are relatively *freischwebend*, for instance in the often short period before a new word or expression is invented, but even then there seems to be some agentive focusing with supportive physical realization, for instance in the form of imagery in working memory that is functioning as a proto-sign.

    Replies
    1. I agree with virtually everything you say except for a minor quibble and I have a clarification question:

      You say "People ... might think that nevertheless our culturally determined concepts are based on "deeper" primitives, properly called "concepts", that are innate (or in some Platonic heaven) after all. It seems to me that that is a metaphysical position that cannot be confirmed or disconfirmed by empirical means".

      I think reference to 'Platonic heaven' should be avoided because it implies some 'place' where these 'concepts' are - and this is exactly what linguistic Platonists like Postal or Katz deny - for good reason. As for confirmability: I am sure you're aware of Katz's comment that IF we take rational realism seriously at all, we cannot expect to confirm the existence of abstract objects with the same methods we use to confirm the existence of concrete objects. But this does not imply there is no way to confirm/disconfirm.

      Let's start with disconfirmation: the reason K&P adopted Platonism is that they believe natural languages NL have some properties [P] that cannot be accounted for by either nominalism or mentalism. Platonism is necessary to account for these P. Now if one can show either [i] that P are NOT [necessary] properties of NL or [ii] that pace K&P mentalism can account for all P, then Platonism is not necessary to account for all properties of NL [it might still be the case that abstract objects exist, but who would care?]. Given that K&P have specified a bunch of P it would make a lot of sense for someone like Chomsky or Fodor to show how they can be accounted for without Platonism [and thus disconfirm linguistic Platonism].

      Confirmation is similarly indirect: if it remains the case that some P cannot be accounted for by any non-Platonist framework, then we have good reasons to believe Platonism is true [at least provisionally, until maybe someone comes along and shows otherwise]. This BTW is the same logic as David used when we talked about Tomasello: there are some properties language undoubtedly has, Tomasello cannot account for them, David can - hence David's framework is superior. The same logic would seem to require adopting Platonism if Merge generates sets but sets cannot be accounted for in a biolinguistic framework, only in a Platonistic one. [I am not intimately familiar with Katz's account of semantics but seem to remember he claimed some P there as well [which would be a good target for disconfirmation].]

      Now to my question: You say: "Note that such primitives would not be attached to words or any kind of physical sign, but be *freischwebend* in some sense"
      maybe just my German gets in the way here: by 'freischwebend' you do not mean an actual [physical] object floating unsupported in space but rather some 'object' that has no physical properties?

    2. I am a moderate Platonist (hence sympathetic to K & P's ideas) in that I am skeptical about naturalistic approaches to universals and intentional objects. "Platonic heaven" was used metaphorically here, actually meaning Platonism "without topos ouranos" in the sense of Hermann Lotze and Gottlob Frege in 19th-century Germany. Rational inquiry into "Platonic" fields is possible, with confirmation or disconfirmation of hypotheses, but without the same possibilities as to the ultimate reality of their objects of inquiry. By "freischwebend" concept I mean a concept existing without being the interpretation of a physically realized sign. In Saussurian terms: I don't believe there are *signifiés* without *signifiants*.

    3. I get stuck with K&P & Platonism almost immediately on the basis that I construe indefinite recursion etc as properties of a mathematical model, useful in the way that a mathematical model of a car engine might be useful for certain purposes even if it left out friction, wear, and the finiteness of the fuel supply, and so falsely predicted that the engine would run for ever. Nobody would get their knickers in a twist about this if they were thinking about car engines, so what's the big deal with grammar?

    4. There are a few problems with your analogy. First, we know independently of the car model that real cars are subject to friction, wear, finiteness of fuel supply, etc. - so in some cases we can indeed abstract away from those properties. For language we are not yet in the position of knowing all its properties [note that Chomsky makes the point repeatedly; I have cited it here: http://ling.auf.net/lingbuzz/001592] - so how can we possibly know if the idealizations of the model are justified?

      Next, in some cases the abstracting away you mention is okay but if you want to model a fuel efficient car [say before building an expensive prototype] you would hardly leave friction out of your model or pretend to have unlimited fuel supply. Instead you'd try to calculate these factors as precisely as you possibly can. So if language is an organ, located in the brain, you want to model brain properties as closely as you can. We may not know too much at this point but one thing we know for sure: brains are finite and their output can only be finite. So why would you use a model that does not represent the one property we definitely know brains have; finiteness? Why would we not use a model with finite recursion, seeking out what the upper bound is?

      Finally, remember that Chomsky claims there is no difference between knowledge of language and language. Postal has collected a lot of quotes in which Chomsky is very clear about this: http://ling.auf.net/lingbuzz/001569 So to get back to your analogy: there is no difference between the model of the car and the car - the model IS the car. Chomsky has also insisted repeatedly that there is no 'longest expression' LE such that you cannot find an expression E that has at least one more element. If this is the case and if these expressions are the output of a biological organ you are not talking model but magic...

      There are more reasons to take Platonism at least seriously. I really recommend reading the K&P paper http://ling.auf.net/lingbuzz/001607 but you have to read all of it, and maybe more than once, and with a mind open enough to allow you to at least consider that they could be right...

    5. Reading (or maybe rereading) K&P will take some time, but I don't think the points you mentioned at the beginning invalidate what I'll call the 'model-based' view, especially for people who don't really care whether to call the version of the model with or without length limitations, center-embedding limitations, breath-pause modelling, etc. 'English', or 'a possible language'. You always have to adjust what's in a model to suit the purpose.

      An intermediate case between language and car engines might be the skeleto-muscular system, where you can have useful models (for some purposes) that didn't account for when bones would break and muscles rip loose from their attachment points.

    6. @Jan Koster. You say: “People who agree with me at this point might think that nevertheless our culturally determined concepts are based on "deeper" primitives, properly called "concepts", that are innate (or in some Platonic heaven) after all. It seems to me that that is a metaphysical position that cannot be confirmed or disconfirmed by empirical means.”

      The logic and “lexicon” of cognitive processes has to be more general / more abstract than our everyday logic and lexicon, because it (or much of it) has had to work in very different environments, such as those of our apish ancestors, hunter-gatherers or IT experts (a good reason to stick to your *interpretations*). Maybe we do have sets in our heads after all ;).

    7. @Avery: It seems we are talking past each other. I am not trying to say that models should account for everything. We know bones are the kinds of things that can break. So a model that has this knowledge built in as background assumption is fine, even if there is no bone-breaking modelled [because the purpose of the model is to focus on some other property of bones, like how they grow in a child]. So that is not the point. The point is that a model that presupposes that bones have a property they could not possibly have [say being tie-able into a knot] would be a useless model of bones. [but it may be a fun model for some rubber kids toy of octopusman].

      I am also not denying that there are many properties of language that can be modelled by Chomsky's 'Merge model'. So your point about center embeddings etc. is not at issue. What is at issue is that such a model is not a model of a biological system, like a human brain. Chomsky compares the language organ to the visual system, or the digestive system, or the immune system. All these systems have some properties they share with other biological systems and some that are specific to them - that make the digestive system different from the immune system - a different organ. And biologists can tell you specifically what some of the differences are.

      Now take language. Can you name one biological property of the language faculty that makes it the language faculty [as opposed to the visual system]? The only candidate on offer at the moment is the operation Merge. So what ARE the biological properties of Merge? I am not aware of a single biological property of Merge that Chomsky [or anyone else] has ever specified. But vkodytek [apologies if this is not your name] is right: we would literally need to have sets in our heads because they are the output of Merge, which is also in our heads. Now it would follow that, given that for example the integers are routinely modeled as sets, we have actual integers---not mental constructs, not representations, not impressions, but numbers themselves in our heads. These are the kinds of absurd consequences biolinguistics has when taken seriously. Of course we can have knowledge of numbers in our heads [and knowledge of sets] but not the numbers themselves...

  13. @Christina: I suspect we are, but am so far finding it interesting to try to figure out where. The Platonist line goes back at least to the early eighties, so can't be an issue involving Merge, but was, iirc, intended as a general approach to all kinds of grammatical theorizing (I do recall spending a fair amount of time thinking about Katz's 1981 book, Postal and Langendoen's _The Vastness of Natural Language_, etc. without being persuaded, also some of Kim Sterelny's writings). At the time I thought of my (LFG) grammars as defining abstract objects, but suspected that it would be a mistake to call what they defined 'English' (or 'Icelandic'), & didn't think I needed a rigorous theory of what these labels ought to be applied to, as long as people seemed to agree at least roughly about which attempted sentences seemed like 'grammatical' or 'ungrammatical' examples in each language, relative to meanings. 'Roughly' here implies, among other things, that the status of trillion-word sentences or massive center embeddings would be irrelevant.

    I'm also not committed to there being any 'faculty of language' with a different basis than any other human abilities, preferring Everett's idea of a 'platform' that supports language as well as other stuff (visual art? planning a route through the supermarket (vaguely inspired by Penni Sibun's 1990s-era ideas about language generation)? tying your shoes? whatever the most satisfactory theory of all this stuff and more seems to suggest). There is presumably some kind of biological endowment there, but not necessarily one specific to language, and calling linguistics a branch of biology makes about as much sense to me as calling herpetology a branch of physics (possibly even less, due to less knowledge of useful intermediary disciplines).

    Merge & sets bring up another collection of problems, since, although set theory is useful for formalizing things, it's not the only way; there is, for example, category theory. I'm too ignorant and bad at math to formalize Merge with category theory, but Alex Clark has written papers with people who might be able to do it (Bob Coecke's theoretical physics+linguistics group at Oxford, working with Lambek's pregroup grammars the last time I looked). The advantage of category theory is that the results of Merge could have exactly the properties you want them to have, and without any extraneous ones due to the fact of being set-theoretical objects.

    Replies
    1. A couple of points. I am not sure if you meant your first comment [that Platonism can't be an issue involving Merge because Platonism goes back to the 80s] seriously. I hope not, but if you [or anyone reading your reply] did, picture this: X moved into an apartment in 1990 and signed a rental contract that specified 'no pets allowed'. X claims that does not apply to the new puppy he got his kids for Christmas in 2012 because the puppy didn't even exist in 1990. Maybe in some goofy sitcom that is considered funny, but.... The relevant part of the K&P critique applies to Merge because Merge is the kind of thing that specifies that a biological organ generates an infinite output. [K&P deal with the common rejoinders 'capacity to generate' and 'potentially infinite' and show why these don't work]. So because Merge is the kind of thing K&P deal with, the criticism applies even though Merge had not been invented [conceptualized?] in the 80s. And if this still does not convince you, have a look at Postal 2009 [ http://ling.auf.net/lingbuzz/001608 ] or Postal 2012 [ http://ling.auf.net/lingbuzz/001569 ]. There Merge is specifically mentioned as a target of the incoherence criticism. As I said, I hope you were joking..

      Now in the remainder of your reply you seem to say two different things: that there are some approaches to linguistics that are not susceptible to the K&P criticisms, and that there are some specifics of K&P and L&P that may not apply even to biolinguistics. Let's look at these in turn.

      1. Currently we are only talking about the incoherence charge [IC] and only about Chomsky's Biolinguistics [B] [not entirely irrelevant here: note that Postal 2012 is very clear that IC is independent of Platonism being right]. K&P do not claim EVERY linguistic theory is a target of IC, only B and those that are in the relevant sense like B. So you have to look at THIS claim. Saying that IC does not refute some linguistic theory T [say Tomasello's] is irrelevant here; K&P never claimed it does. So if you can find a theory that is not targeted by IC, so much the better for you, but that does not rescue Chomsky's B.

      2. K&P and L&P make several independent arguments; IC is one of them. If one of these arguments is fatal for biolinguistics, it does not matter whether the others fail. Picture a hunter who has three bullets in his gun and aims at a lion. If the first bullet kills the lion, it does not matter if the other two miss or if the gun jams after the first shot is fired. No one would say the hunter did a bad job of killing the lion because it took him only one bullet. Postal [2009, 2012] claims IC is the kind of argument that 'kills' the biolinguistic lion. If you want to refute this claim, it is pointless to focus on the other 'bullets' in the K&P and L&P guns; you need to show that it is literally possible to have sets in our heads.

      Regarding your last paragraph: I am not familiar enough with Alex C's theory to evaluate whether it is in the relevant sense like B or not; maybe he can speak to that.

      Delete
    2. Don't confuse me with Stephen Clark at Cambridge! He is the one who has done work on distributional semantics/category theory/pregroup grammars.

      I don't see any problem with using abstract or infinite objects in a model for a concrete finite system. So if you have a real, physical, finite pocket calculator, it is reasonable to say that it adds numbers; one might want to add the caveat (but only up to 10^69 and within certain accuracies). So in my own work, I say that we have grammars where the nonterminals are, say, infinite sets of strings. Now of course when I implement this in a computer program and run it, the nonterminal in the grammar is a very real thing: a physical configuration of the memory of the computer. But I would describe it, in my paper, using some abstract mathematical description. I don't think this is problematic.
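
      To make the point concrete, here is a minimal sketch (illustrative only, with made-up names like GRAMMAR and derives, not the grammars from the papers): a toy grammar whose single nonterminal denotes the infinite set of strings a^n b^n. The Python object below is a small, finite configuration of computer memory, yet the set it denotes has no largest member; describing that finite object by means of an abstract, infinite set is exactly the modelling move at issue.

        # Toy illustration (hypothetical names): a finite grammar object whose
        # nonterminal "S" denotes an infinite set of strings.
        GRAMMAR = {
            "S": [["a", "S", "b"], ["a", "b"]],   # S -> a S b | a b
        }

        def derives(symbol, word):
            """Return True if `word` (a list of terminal symbols) is derivable from `symbol`."""
            if symbol not in GRAMMAR:             # terminal: must match exactly
                return word == [symbol]

            def match(syms, rest):
                # Can the sequence of symbols `syms` jointly derive the string `rest`?
                if not syms:
                    return not rest
                return any(derives(syms[0], rest[:i]) and match(syms[1:], rest[i:])
                           for i in range(len(rest) + 1))

            return any(match(rhs, word) for rhs in GRAMMAR[symbol])

        print(derives("S", list("aaabbb")))   # True: a^3 b^3 is in the denoted set
        print(derives("S", list("aabbb")))    # False: an unbalanced string is not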

      There are plenty of things I object to in biolinguistics, but this isn't one of them.

      Delete
    3. Thanks for this. Note I did not claim knowledge about your view [though I think there should be a law banning any more people named 'Clark' from entering cogsci/linguistics :)].

      Now about your comment. What is really at issue, after all the analogies and models are left at the gate, is this: what is the ontology of natural-language sentences? You say grammars specify infinite sets of strings, so these are abstract things. Then you talk about something physical, 'a configuration of the memory of the computer'. Now on your view, are natural-language sentences [i] like the physical things, aspects of the memory of a computer, or [ii] like the abstract things?

      If sentences are [i], then they are finite in number, no matter what any model says. If this is your view, it is not subject to IC, but it is also very different from Chomsky's [B]. If your view requires sentences to be [ii], then you reject biolinguistics not only for the 'plenty of things' but also for ontological reasons.

      Delete
    4. I think you are making unnecessary difficulties over this type/token distinction. Scientists are generally interested in the repeated properties of events, which are abstracted into various types. So we have volcanoes of various types. Vulcanologists study volcanoes in the abstract, and particular individual volcanoes. Is the subject matter of the discipline of vulcanology the study of volcano tokens or volcano types?

      One could claim that vulcanology is clearly incoherent because it fails to make this crucial ontological distinction -- but I doubt that the annual vulcanology congress does (or should) spend any time discussing this vexed question.

      The finiteness/infiniteness thing seems an innocent idealisation, in line with frictionless surfaces, perfect vacuums, incompressible fluids, Gaussian distributions and the rest of it.

      Delete
    5. Thanks for the introduction to volcanology. It seems, though, that you use the term 'abstract' here in two senses that are better kept separate:

      1. abstract [= Platonic] objects, not existing in time and space and independent of any concrete objects, and
      2. abstract concepts, based on salient properties of concrete objects and used as 'stand-ins' or 'representations' of said objects in, say, theory construction.

      People who study volcanoes are interested in both: [i] properties of individual volcanoes and [ii] a maximally general account of the properties of all volcanoes. Now any laws or principles which result from [ii] will hold of the physical volcanoes. So we can abstract away from certain properties of particular volcanoes, e.g. the fact that one is in Italy and another in Hawaii. But in so abstracting one will not turn volcanoes into abstract objects which are indestructible, not located in time and space, etc. The physical volcanoes are not affected by such abstractions and obediently remain sitting in Italy or Hawaii in all their glory.

      Now if sentences are of the same kind as volcanoes [e.g. physical objects, here biological things], doing some abstracting when theorizing about sentences will not turn them into abstract objects, and if they are abstract objects, nothing can turn them into biological things.

      The type/token distinction in linguistics is between abstract objects and performances of them, the latter of course having space-time coordinates. Tokens will in the standard case be noises, lasting a few seconds, occurring at places on earth. Abstracting from them won't get one anywhere near abstract objects [types or sentences]. Saying that the sentence 'Chomsky is a famous linguist' is literally located in your brain appears very odd; saying that Monte Vesuvio is located in Italy, on the other hand, is not odd at all.

      Whether the finiteness/infiniteness assumption is an innocent idealization seems rather questionable. Vulcanologists would probably object to a model that predicts infinite lava output. And Chomsky objects to idealizations that stay much closer to the known physical properties of brains when he criticizes connectionists for "abstracting radically from the physical reality, and who knows if the abstractions are going in the right direction?" (Chomsky, 2012, p. 67). But compared to the jump from finitude to infinity, connectionist models stay extremely close to the physical reality...

      Delete
    6. Chomsky says some ridiculous and indefensible things, and I have no intention of supporting his attacks on connectionism, or his attempts to define what are or are not appropriate generalisations or methodological assumptions. I think, however, that the assumption that the set of possible sentences is infinite is a perfectly reasonable "methodologically expedient counterfactual idealisation", to be augmented by some suitable performance model that will limit the class of sentences actually uttered by humans to some very large but finite set.


      I don't understand your point about abstract tokens. On one view we have sentences (types) like "It is snowing", and particular utterances (tokens) like my saying "It is snowing" at 7 a.m. on Christmas Eve. The utterance itself is a complex event or sequence of events which consists, among other things, of certain neural phenomena (in my brain and in the brains of the persons I was talking to), certain physical phenomena in my articulatory apparatus, and certain acoustic phenomena, etc.

      I don't see where the Platonism comes in here: if there are any platonistic objects then they are by definition not causally involved in the events that we are talking about.

      Delete
    7. Oops, how did I get 'Alex' and 'Stephen' mixed up? Good thing this is a blog, not an article. Returning to K&P, I should have said 'focussed on' or something like that, not 'involving', which would have made my claim the banal platitude it was supposed to be.

      Taking one of the arguments against Conceptualism, the Veil of Ignorance (K&P 524-525), I find it highly unconvincing because I don't think that either my or even Chomsky's brand of psychologism commits us to claiming to be able to get complete and certain knowledge of postulated mental mechanisms by examining only grammatical behavior. All that is required from the grammatical behavior is useful hints. So if the grammatical behavior suggests an infinite number of sentences, but neurophysiological study reveals a vast list, that's just the way it is, tho we will want a theory of why the behavior was so misleading.

      Chomsky, however, seems to think he can extract far more in the way of conclusions from the hints than I think is warranted, but that's a different issue. Some kind of rather flaky pushdown store is probably about as much as I would want to commit to.

      Delete
    8. You misunderstood my motivation for citing Chomsky here. I believe the attack on connectionism is not justified. Nevertheless, I think the general point he was making, namely that we should not be "abstracting radically from the physical reality, [especially if we don't know] if the abstractions are going in the right direction", is valid.

      Further, if your research concerns only utterances themselves, the complex events or sequences of events which consist, among other things, of certain neural phenomena in the brains of speakers or listeners, certain physical phenomena in their articulatory apparatus, and certain acoustic phenomena, etc., then Platonism is not relevant to it [nor has any Platonist I know claimed it would be].

      It is also not at issue whether the assumption that the set of possible sentences is infinite is a "methodologically expedient counterfactual idealisation", but whether it is a GOOD idealization. Take the vulcanologist. If he abstracts away from all friction, air resistance and a bunch of other physical factors when calculating how large an area should be evacuated in case of an eruption, his model might predict the entire earth, because nothing will stop the flow of the lava. A model that takes the relevant physical factors into account, on the other hand, will give at least a pretty good estimate for the 'general case' and will be helpful for actually occurring individual cases. So it does matter what is abstracted away and what isn't, in linguistics just as in vulcanology. And because it matters, it also matters what kinds of things sentences are. If it's tokens all the way, then a model that is based on an infinite set of possible sentences seems like a very bad model. Why not use a model of a finite collection of sentences? You may have good reasons for preferring the infinite-set model, but you have not revealed what they are.

      Delete
    9. I think the argument against finite cutoff-ism is that the facts appear to be statistical in nature. We don't really know that some bizarre cult won't emerge wherein it is seen as useful to have somebody intone "God created a proton, and then another one, and then another one, ...", nor that there won't be life-extension technologies that let this go on for a very long time, & we can't predict how long this bizarre civilization might survive into the heat-death of the universe, etc. etc. So the choice of finite cutoff is arbitrary, even though the probability of the ultra-long sentences ever being produced or consumed is extremely low.

      One area where K&P and I are perhaps singing the same song is the idea that linguistics ought to pay more attention to e-language, on the basis that there can be properties of the performances that are independent of the implementation. In phonetics, for example, it appears to be the case that certain kinds of phonetic effects can be produced by multiple means, which tend to all get used. But this doesn't mean that linguistics doesn't have anything to say about implementation.

      Delete
    10. I'd oppose Rens Bod-style 'flatness' on similar grounds, from what I've seen so far. The Greeks, for example, sometimes embed genitive possessors inside NPs, as in 'the secret the(G) sea(G) passages' (the secret passages of the sea), but these possessors are almost always no more complex than a definite article followed by a noun (in about 800 pp. of poems, I haven't noticed any that are more complex than that). This looks good for flat structure, but when you produce a parodic elaboration of the above such as 'the secret the(G) Australian(G) immigration(G) passages', it seems to be OK. Recursive PSG predicts that this option exists, even if it is hardly ever used in practice, and this prediction appears to be correct.
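
      A toy fragment makes the prediction explicit (illustrative rules only, not a serious grammar of Greek; the category names are invented and case marking is abstracted away): once the possessor slot inside NP is itself filled by a full NP, the grammar licenses possessors of any internal complexity, whether or not the corpus ever uses them.

        # Illustrative only: a recursive phrase-structure fragment in which a
        # genitive possessor NP sits between the article and the head noun.
        RULES = {
            "NP": [
                ["Det", "N"],
                ["Det", "AP", "N"],
                ["Det", "NPgen", "N"],          # possessor between Det and N
                ["Det", "AP", "NPgen", "N"],
            ],
            "NPgen": [["NP"]],                   # the possessor is itself a full NP
        }

        def patterns(symbol, depth):
            """Enumerate category strings derivable from `symbol` in at most `depth` expansions."""
            if symbol not in RULES:              # lexical category
                yield [symbol]
                return
            if depth == 0:
                return
            for rhs in RULES[symbol]:
                partial = [[]]
                for sym in rhs:
                    partial = [p + e for p in partial for e in patterns(sym, depth - 1)]
                yield from partial

        for p in sorted(set(" ".join(x) for x in patterns("NP", 3))):
            print(p)
        # The output includes both the common 'Det AP Det N N'
        # (the secret the(G) sea(G) passages) and the rarer but licensed
        # 'Det AP Det AP N N' (the secret the(G) Australian(G) immigration(G) passages).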

      Delete
    11. Which means, finishing the thought, that eventually a more complex possessor should show up in a natural text (and not in one clearly imitating AG syntax, for which there are at least two examples of complex embedded possessors in the traditional grammar handbooks). The Veil of Ignorance exists, but it is not absolute, and is thinner in some places than in others.

      Delete
    12. @Christina (the flat commenting is not ideal for this sort of discussion), I more or less agree with Avery on why infinity is a good idealisation.
      Take another example: say you have a statistical model of the height of tigers. You might well choose a model that is a Gaussian distribution with mean 1.5 m and s.d. 20 cm (I just made that up). This would be pretty standard, but it does assign a nonzero probability to tigers that are 10 km high as well as to tigers that are -1 m high. But this doesn't mean that the model is incoherent; it just means that it doesn't work well outside a certain range.
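
      A quick numerical check of that made-up model (computed below with the log of the normal density so the numbers don't underflow; the figures are just what the Gaussian formula gives, not real tiger data): the density at 10 km or at -1 m is mathematically nonzero but astronomically small, which is exactly the sense in which the model is harmless outside its intended range rather than incoherent.

        # Log of the normal density for the made-up tiger-height model
        # (mean 1.5 m, s.d. 0.2 m); purely illustrative numbers.
        import math

        MEAN, SD = 1.5, 0.2   # metres

        def normal_logpdf(x, mu=MEAN, sigma=SD):
            return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

        for height in [1.5, 2.0, -1.0, 10_000.0]:     # metres; includes the 10 km tiger
            print(f"{height:>9} m   log-density {normal_logpdf(height):.3e}")
        # Roughly 0.69 at the mean, about -77 at -1 m, and about -1.25e9 at 10 km:
        # never exactly zero, just vanishingly small outside the sensible range.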

      I also don't think that I have to show that the infiniteness assumption is a good idealisation (though I could, along the lines of how Avery is arguing). If I get the argument right, then Postal is claiming that this assumption leads to incoherence. To refute this claim it is enough just to show that it is an idealisation of a standard type, even if it is not a good idealisation. Take vulcanology again -- if we assume that lava is frictionless, it may lead to incorrect conclusions and bad predictions, but it doesn't make the theory incoherent.

      I think it is a fair criticism to say that insufficient attention has been paid to performance models in the generative program, and I find the neglect of computational models, as I was arguing with Norbert, incomprehensible and unjustifiable.

      As to why it is a good idealisation: with the mathematical tools that we standardly deploy (e.g. phrase structure grammars), where we have a binary grammaticality decision, it is hard, as Avery says, to draw a principled line around a finite set. So it is expedient because it makes the maths and the models easier. There are other tools (e.g. Pullum/Rogers-style model-theoretic syntax) where it might be easier.
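
      One way to see the expedience point with a toy example (the a^n b^n language again, nothing linguistic about it): the unbounded grammar needs one recursive nonterminal, while a version capped at some finite n has to build a counter into its nonterminals, and the cap itself does no explanatory work.

        # Toy illustration: capping a recursive grammar at a finite depth forces
        # a counter into the nonterminals; the cutoff value is arbitrary.
        def capped_rules(limit):
            """CFG for {a^n b^n : 1 <= n <= limit}, written out without recursion."""
            rules = {"S_1": [["a", "b"]]}
            for k in range(2, limit + 1):
                rules[f"S_{k}"] = [["a", f"S_{k-1}", "b"]]
            rules["S"] = [[f"S_{k}"] for k in range(1, limit + 1)]  # start symbol lists every depth
            return rules

        print(len(capped_rules(100)))   # 101 nonterminals; the unbounded grammar needs only one ("S")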

      Delete
  14. @Christina, thanks for the hint (March 3, 2013 at 5:00 PM) about my name. I hadn't thought about it. Now I have persuaded my account to display my name.

    When I said “Maybe we do have sets in our heads after all”, I meant it in this sense: if it’s true, as Fodor claims, that our concepts are innate [and *in some sense* they have to be; they can’t be otherwise], then what our everyday concepts are based on has to be more general and more abstract than they are. As a consequence, if sets are anywhere at all, they are in our heads.

    What’s the alternative? We can go and say Fodor’s view is crazy because it’s impossible that we have an innate concept of carburetor. But it’d be a vulgar, tabloidish interpretation.

    ReplyDelete
  15. I agree with Alex and Avery that the infinity assumption is innocent and cannot be held against biolinguistics, no matter what Postal says in this regard. A finite cut-off point for sentence generation just doesn't make sense. Infinity assumptions about finite physical systems are not unusual. According to classical mechanics, for instance, unimpaired rectilinear movement goes on forever. Finiteness is caused by other, intervening factors. Similarly, there is no highest number that your pocket calculator can compute, at least not in principle. In practice, there are limitations of time, memory, etc. Perhaps it is useful, as in intuitionist mathematics, to make a distinction between virtual infinity (as in certain algorithms, accepted by intuitionists) and actual infinity (as in more standard Platonistic interpretations of set theory, like Cantor's). What is implemented in finite physical devices is virtual infinity, not actual infinity.
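
    A small illustration of the virtual/actual contrast (my own toy example, not anything from the intuitionist literature): the successor procedure below is a finite object and every run of it halts, yet no particular integer is the largest it can produce; the only ceilings are practical ones of time and memory, exactly as with the pocket calculator.

      # Virtual infinity in a finite device (illustrative): a finite procedure
      # with no in-principle largest output, only practical limits.
      def successor(n: int) -> int:
          return n + 1          # Python integers are unbounded in principle

      n = 0
      for _ in range(5):        # any finite run produces a finite value...
          n = successor(n)
      print(n)                  # 5 ...but no run exhausts what the procedure could produce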

    I have other reasons to be skeptical about biolinguistics. For instance, I agree more with Christina about the difference between linguistics and vulcanology. Unlike volcanoes, words and sentences are human tools after all, and it seems impossible to characterize human tools in terms of physics or biology. Such objects have what philosophers call "derived intentionality". So far, the source of the intentionality in question has been entirely elusive.

    ReplyDelete