Wednesday, April 3, 2019

Dueling Fodor interpretations

Bill Idsardi

Alex and Tobias from their post:

"The ground rule of (Fodorian) modularity is domain specificity: computational systems can only parse and compute units that belong to a proprietary vocabulary that is specific to the system at hand."

and

"Hence Hale & Reiss' statement that nothing can be parsed by the cognitive system that wasn't present at birth (or that the cognitive system does not already know) appears to be just incorrect. Saying that unknown stimulus can lead to cognitive categories everywhere except in phonology seems a position that is hard to defend."

I think both parties here are invoking Fodor, but with different emphases. Alex and Tobias are cleaving reasonably close to Fodor 1983 while Charles and Mark are continuing some points from Fodor 1980, 1998.

But Fodor is a little more circumspect than Alex and Tobias about intermodular information transfer:

Fodor 1983:46f: "the input systems are modules ... I imagine that within (and, quite possibly, across)[fn13] the traditional modes, there are highly specialized computational mechanisms in the business of generating hypotheses about the distal sources of proximal stimulations. The specialization of these mechanisms consists in constraints either on the range of information they can access in the course of projecting such hypotheses, or in the range of distal properties they can project such hypotheses about, or, most usually, on both."

"[fn13] The "McGurk effect" provides fairly clear evidence for cross-modal linkages in at least one input system for the modularity of which there is independent evidence. McGurk has demonstrated that what are, to all intents and purposes, hallucinatory speech sounds can be induced when the subject is presented with a visual display of a speaker making vocal gestures appropriate to the production of those sounds. The suggestion is that (within, presumably, narrowly defined limits) mechanisms of phonetic analysis can be activated by -- and can apply to -- either acoustic or visual stimuli. It is of central importance to realize that the McGurk effect -- though cross-modal -- is itself domain specific -- viz., specific to language. A motion picture of a bouncing ball does not induce bump, bump, bump hallucinations. (I am indebted to Professor Alvin Liberman both for bringing McGurk's results to my attention and for his illuminating comments on their implications.)" [italics in original]

I think this quote deserves a slight qualification, as there is now quite a bit of evidence for multisensory integration in the superior temporal sulcus (e.g. Noesselt et al 2012). As for "bump, bump, bump", silent movies of people speaking don't induce McGurk effects either. The cross-modal effect is broader than Fodor thought too, as non-speech visual oscillations that occur in phase with auditory oscillations do enhance brain responses in auditory cortex (Jenkins et al 2011).

To restate my own view, to the extent that the proximal is partially veridical with the distal, such computational mechanisms are substantive (in both their elements and the relations between elements). The best versions of such mechanisms minimize both substance (the functions operate over a minimum number of variables about distal sources; they provide a compact encoding) and arbitrariness (the "dictionary" is as small as possible, containing just the smallest fragments that can serve as a basis for the whole function; the encoding is compositional and minimizes discontinuities).
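
A toy sketch of the contrast, in Python, with an invented mini-inventory (nothing here is Fodor's formalism; it is only meant to make "small basis, compact and compositional code" concrete):

```python
# Holistic encoding: one arbitrary dictionary entry per distal
# category. The dictionary grows with the inventory, and nothing
# about the code reflects relations among the categories.
holistic = {"i": 17, "u": 4, "e": 23, "o": 9}

# Compositional encoding: a small basis of binary features; each
# category is a combination of basis elements. Three features cover
# up to eight categories, and natural classes fall out for free.
BASIS = ("high", "back", "round")  # the whole "dictionary"

def encode(high: bool, back: bool, round_: bool) -> tuple:
    """Map distal properties onto a feature bundle (the proximal code)."""
    return (high, back, round_)

i = encode(True, False, False)
u = encode(True, True, True)

# A relation among distal sources ("both are high") is preserved in
# the code as shared structure, not stipulated entry by entry:
assert i[0] == u[0]
```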

And here's Fodor on the impossibility of inventing concepts:

Fodor 1980:148: "Suppose we have a hypothetical organism for which, at the first stage, the form of logic instantiated is propositional logic. Suppose that at stage 2 the form of logic instantiated is first-order quantificational logic. ... Now we are going to try to get from stage 1 to stage 2 by a process of learning, that is, by a process of hypothesis formation and confirmation. Patently, it can't be done. Why? ... [Because] such a hypothesis can't be formulated with the conceptual apparatus available at stage 1; that is precisely the respect in which propositional logic is weaker than quantificational logic."

Fodor 1980:151: "... there is no such thing as a concept being invented ... It is not a theory of how you acquire concepts, but a theory of how the environment determines which parts of the conceptual mechanism in principle available to you are in fact exploited." [italics in original]

You can select or activate a latent ability on the basis of evidence and criteria (the first-order analysis might be much more succinct than the propositional one), but you can't build first-order logic solely out of the resources of propositional logic. You have to have first-order logic already available to you in order to choose it.
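
A minimal sketch of the point in code, using toy grammars (the names and the enumeration are invented for the example): a learner whose hypothesis space is generated by the propositional grammar can never even write down a quantified hypothesis, because the constructor it would need is not in its vocabulary.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    arg: "Prop"

@dataclass(frozen=True)
class And:
    left: "Prop"
    right: "Prop"

# Stage 1 vocabulary: propositional formulas only.
Prop = Union[Atom, Not, And]

@dataclass(frozen=True)
class Forall:
    var: str
    body: "Fol"

# Stage 2 vocabulary: everything above, plus quantification.
Fol = Union[Prop, Forall]

def stage1_hypotheses(depth: int):
    """Enumerate the stage 1 hypothesis space. However deep the
    search goes, no Forall node can appear: the function only ever
    combines the stage 1 constructors."""
    if depth == 0:
        return [Atom("p"), Atom("q")]
    smaller = stage1_hypotheses(depth - 1)
    return (smaller
            + [Not(f) for f in smaller]
            + [And(f, g) for f in smaller for g in smaller])
```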

References

Fodor JA 1980. On the impossibility of acquiring "more powerful" structures. Fixation of belief and concept acquisition. In M Piattelli-Palmarini (ed.) Language and Learning: The Debate between Jean Piaget and Noam Chomsky. Harvard University Press. 142-162.

Fodor JA 1983. Modularity of Mind. MIT Press.

Fodor JA 1998. Concepts: Where Cognitive Science went Wrong. Oxford University Press.

Jenkins J, Rhone AE, Idsardi WJ, Simon JZ, Poeppel D 2011. The Elicitation of Audiovisual Steady-State Responses: Multi-Sensory Signal Congruity and Phase Effects. Brain Topography, 24(2), 134–148.

Noesselt T, Bergmann D, Heinze H-J, Münte T, Spence C 2012. Coding of multisensory temporal patterns in human superior temporal sulcus. Frontiers in Integrative Neuroscience, 6, 64.

8 comments:

  1. When discussing phonology and Fodor, we gotta remember Wendy Sandler's wonderful critique of his claims in the light of sign language phonology.

  2. I know researchers have tried to argue that the existence of sign-language "phonology" suggests that phonology is not substance laden, or that it is not modularly organised. However, it depends on the details (for the moment, I will stay away from Bill's view that substance means "partially isomorphic", which I am very sympathetic to and which is, I think, qualitatively different from other claims about substance). I always found the claim about substance-freeness based on sign-language phonology tough to swallow.

    It is not clear to me that sign language phonology is similar to spoken language phonology. In particular, the fact that even deaf children babble vocally before babbling manually (the latter only if they are lucky enough to have signing parents) suggests that there are actually two different externalisation systems. This does require more analysis, however; simply observing the existence of two externalisations with potentially similar organization doesn't argue for one position over the other, in my opinion.

    Also, thanks for the Sandler reference - I hope to read it more carefully.

    Replies
    1. I agree. Sandler is not claiming substance-freeness in this critique, merely critiquing Fodor's claims about "modules" in light of phonology. What makes all of her work so profound is that she consistently examines sign phonology on its own terms, and lets the connections to spoken phonology and amodality emerge (and emerge they do).

      Sign phonology cannot imply substance freeness just by existing. Serious analysis of sign languages as normal languages showcases the extremely subtle representational and computational tradeoffs in how specific modalities realize phonology, which functions as a theory of "what's the substance and what's it doing?" I have a much more detailed paper on this, but it's in review so I guess we'll have to wait to hear more.

  3. I'd like to comment on two issues raised by Bill (in two pieces due to length):
    1. multisensory integration,
    2. the impossibility of inventing concepts.

    Multisensory integration
    There is indeed no doubt that a given computational system can take inputs from several other systems. The McGurk effect is one case in point, synesthesia another, and there are surely many more outside of language. But I don't quite understand in what way this is an argument against domain specificity (understood as "computational systems can only process their own vocabulary (i.e. just one vocabulary, not two or more)"). Interface devices translate one vocabulary into another. In a situation where a module X receives information from only one source Y, translation will be Y → X. In case module X receives information from two sources Y and Z, there will be a translation Y → X and a translation Z → X. None of that changes the fact that module X will only process its own X-vocabulary. Translation is there to ensure that, no matter how many different input modules/vocabularies there are.
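
    Schematically, with all names invented just to make the claim explicit:

    ```python
    # Module X computes over VocabX and nothing else; input from Y or Z
    # reaches it only through a dedicated translation device per source.

    class VocabX: pass  # the proprietary vocabulary of module X
    class VocabY: pass  # vocabulary of an upstream module Y
    class VocabZ: pass  # vocabulary of a second upstream module Z

    def translate_y_to_x(item: VocabY) -> VocabX:
        """Interface device: rewrites Y-vocabulary as X-vocabulary."""
        return VocabX()

    def translate_z_to_x(item: VocabZ) -> VocabX:
        """A second interface for a second input source, same target."""
        return VocabX()

    def module_x(item: VocabX) -> None:
        """The module is monoglot: however many upstream sources feed
        it, it only ever sees its own X-vocabulary."""
        assert isinstance(item, VocabX)
        # ... proprietary X-computation would go here ...

    module_x(translate_y_to_x(VocabY()))  # input originating in Y
    module_x(translate_z_to_x(VocabZ()))  # input originating in Z
    ```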

    If the bare existence of multisensory integration meant that modules are able to process distinct vocabularies, the whole idea of modularity would dissolve. The competing business, connectionism, holds that computation is colourless and not specific to any task. If you don't want to end up there but accept that modules can process more than one vocabulary, you will have to draw a red line between an area of vocabularies that can be co-processed by a given module and another where vocabularies cannot be co-processed. It is hard to imagine how such a red line could be drawn.
    Also, if modules can process more than one vocabulary, interfaces are superfluous. The existence of interfaces, i.e. translating devices, follows from domain specificity as defined above. In phonology, by the end of the 70s everybody was disillusioned with the SPE interface theory based on boundaries: there was a whole boundary zoo (#, +, &, ! etc.). Boundaries are the result of translation: morpho-syntactic structure is translated into vocabulary that can be parsed by phonology. Being fed up with that conceptually and empirically, one move was to throw away translation and to say that phonology can directly read morpho-syntactic categories. This is called Direct Syntax: early instantiations include Pyle (1972), Rotenberg (1978) and Hyman (1978: 459); the climax of the movement came in the 80s with Kaisse (1985) and Odden (1987). Prosodic Phonology opposed Direct Syntax with the argument of domain specificity, which was called Indirect Reference. I recap this piece of the history of phonology in my book on the history of interface theories (Scheer 2011: §§134, 407).
    If you accept that modules can process more than one vocabulary, you don't need to bother with cumbersome translation: Direct Syntax will do the job. And there is no reason for Vocabulary Insertion either: the only difference is that the lexicon here is a little bigger than for boundary information, where the units to be inserted into the phonology boil down to # and a few others, totalling fewer than a dozen.
    Jackendoff does computational translation (as opposed to list-based translation through a lexicon), and his interface processors thus need to be able to process two distinct vocabularies, input and output. He calls that bi-domain specificity. That dissolves interfaces for the same reason: if modules can process more than one vocabulary, no interface device is needed. I have a paper on this underway; I'll post the link to the pre-print when it's done.

    Replies
    1. McGurk, etc. isn't an argument against domain specificity. It's an argument for lawful information transfer between modules. For example, with McGurk effects, timing information in the auditory system must be combined with timing information in the visual system to yield a multisensory temporal code. And then multisensory temporal "conclusions" must be passed back across the interface to yield temporal predictions in the auditory and visual systems ("predictive coding"). The three systems may code time and temporal relations differently, but there have to be lawful translations (quasimorphisms) between them (such that a relation like "more time" is preserved across the interfaces).
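
      In code, the minimal requirement might look like this (the codes and units are invented for illustration):

      ```python
      def visual_to_shared(v_sec: float) -> float:
          """Lawful, order-preserving translation from a visual time
          code (say, seconds) into a shared code (say, milliseconds)."""
          return v_sec * 1000.0

      def auditory_to_shared(a_ms: float) -> float:
          """The auditory code here already happens to use milliseconds."""
          return a_ms

      def later_than(a: float, b: float) -> bool:
          return a > b

      # Two events, each registered in both modalities.
      vis1, vis2 = 0.120, 0.080  # visual code, in seconds
      aud1, aud2 = 121.0, 79.0   # auditory code, in milliseconds

      # "More time" survives the translation (a quasimorphism):
      assert later_than(vis1, vis2) == later_than(
          visual_to_shared(vis1), visual_to_shared(vis2))

      # So a multisensory estimate can combine the translated codes,
      # and the combined "conclusion" still orders the events correctly:
      combined1 = (visual_to_shared(vis1) + auditory_to_shared(aud1)) / 2
      combined2 = (visual_to_shared(vis2) + auditory_to_shared(aud2)) / 2
      assert later_than(combined1, combined2)
      ```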

      Another important example is the efference copies sent from motor systems to perceptual systems, which allow the perceptual system to correct for self-movement. Motor control does involve proprioceptive and sensory feedback.

  4. Now you are right, Bill, that the Fodor quotes you mention show that Fodor was rather permissive about how large the domain of domain specificity is. But it seems to me that his conclusion is along the lines of what I say above: McGurk is domain-specific, the domain being "language". That is, if "language" is a single computational system, it works with its own unique vocabulary, and the visual input (McGurk) as much as the auditory input (regular speech waves) is translated into this "language"-vocabulary.
    Jackendoff says this:
    "'Mixed' representation[s] should be impossible. Rather, phonological, syntactic and conceptual representations should be strictly segregated, but coordinated through correspondence rules that constitute the interfaces." Jackendoff (1997: 87ff)

    Impossibility of inventing concepts
    The position you are illustrating with the Fodor quote, Bill, is the classical dualistic / Cartesian take (also Hale & Reiss' take on innate substance-laden features): the sensory input is only noise and garbage that the mind cannot make sense of unless it knows in advance what will come its way. I subscribe to dualism / the Cartesian position, but that does not mean that each and every thing is innate and genetically coded, or that genetically coded things are domain-specific. Critics of Chomsky have said for decades that what is genetically coded is domain-general rather than specific to language (they buy the U, but not the G, of UG). Categorization is one of these domain-general capacities that nobody doubts the cognitive system has. Chomsky has largely swung around to this position in the minimalist attempt to empty UG as much as possible: third-factor (= domain-general) explanations are the best, he says.
    The question, it seems to me, is thus which cognitive categories come into being through domain-specific innate endowment (plus environmental stimuli), and which exist due to domain-general cognitive mechanisms. Hale & Reiss say phonological categories are of the former kind; Alex and I say they belong to the latter family.
    The generality of Fodor's take, according to which you can't invent concepts, seems untenable to me. In our original post Alex and I quoted documented cases where the cognitive system does make sense of things (electric signals) that it has certainly never encountered before, in both production (bionic prostheses) and perception (the bionic eye). That is hard, positive evidence, and in the face of it I cannot see how it can be maintained that the cognitive system is unable to invent new concepts.
    I am not sure about "concepts", though: in the quote Fodor talks about making hypotheses in order to get from stage 1 to stage 2 in a learning process (and his example, like yours, Bill, in another post, is about math). What I am thinking of is much simpler: there are no hypotheses involved and no learning. The only thing that happens is the association of a real-world item with a cognitive category. This category is empty but discrete (like all other cognitive categories), and it will be associated with, say, a wavelength in colour perception, or labiality in the case of melodic primes in phonology. Anything and everything present in the real world can be associated with an (empty and discrete) cognitive category.
    I can't see that this is in any practical or conceptual way impossible: the association at hand is categorization. The successful association of a real-world category X with a cognitive category A produces an entry in the interface lexicon: |X ↔ A|.
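
    A toy rendering of what I mean (all names invented):

    ```python
    import itertools

    _fresh = itertools.count()

    def new_category() -> str:
        """Mint an empty, discrete cognitive category: no content,
        just distinctness from every other category."""
        return f"A{next(_fresh)}"

    interface_lexicon: dict = {}

    def categorize(real_world_item: str) -> str:
        """Associate an incoming item with a category, creating the
        interface lexicon entry |X <-> A| on first encounter."""
        if real_world_item not in interface_lexicon:
            interface_lexicon[real_world_item] = new_category()
        return interface_lexicon[real_world_item]

    categorize("wavelength ~700nm")  # colour perception
    categorize("labiality")          # a melodic prime in phonology
    categorize("electric signal")    # something never met before
    ```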

    Finally, I am not sure I understand the workings of your own take, Bill (but I'll read more): a small lexicon, with both primes and computation substantive. Is there a published version of this?

  5. Hyman, Larry. 1978. Word Demarcation. In Joseph Greenberg (ed.), Universals of Human Language, Vol 2, 443-470. Stanford: Stanford University Press.
    Jackendoff, Ray. 1997. The Architecture of the Language Faculty. Cambridge, Massachusetts: MIT Press.
    Kaisse, Ellen. 1985. Connected Speech: The Interaction of Syntax and Phonology. London, New York: Academic Press.
    Odden, David. 1987. Kimatuumbi phrasal phonology. Phonology 4. 13-26.
    Pyle, Charles. 1972. On Eliminating BM's. In Paul Peranteau, Judith Levi & Gloria Phares (eds.), Papers from the eighth regional meeting of the Chicago Linguistic Society, 516-532. Chicago: Chicago Linguistic Society.
    Rotenberg, Joel. 1978. The Syntax of Phonology. PhD dissertation, MIT.
    Scheer, Tobias. 2011. A Guide to Morphosyntax-Phonology Interface Theories. How Extra-Phonological Information is Treated in Phonology since Trubetzkoy's Grenzsignale. Berlin: Mouton de Gruyter.

  6. Hi Tobias, thank you for all of your comments.

    I believe strongly in modularity, but I think you need to cast a wider net for your examples of connecting systems; the lexicon is unusual in many ways, and shouldn't serve as the paradigm across cognition.

    Much of my concern with the notions "substance free" and "arbitrary" comes from looking at other examples, outside of language, from cognitive psychology and neuroscience, on such notions as "transduction". Sensory transduction is a case that we know quite a lot about. As Fodor (1983:45) says, "the character of transducer outputs is determined, in some lawful way, by the character of impinging energy at the transducer surface; and the character of the energy at the transducer surface is itself lawfully determined by the character of the distal layout." A good example of this is the Jeffress code for auditory localization in birds (Hickok, G. 2014. The Myth of Mirror Neurons. pp. 116-119; much more detail is provided in Ashida, G. & Carr, C. E. 2011. Sound localization: Jeffress and beyond. Current Opinion in Neurobiology, 21(5), 745-751).

    The notions "substance free" and "arbitrary" disregard (or at least drastically downplay) the *lawful* nature of such sensory and perceptual processes. And we also know that such systems are not learned through associationism. I'm all for modules, but the interfaces between them are not a free-for-all. In my view (and this is the standard view in sensation and perception) there's a lawful process that translates basic concepts from one side of the interface (e.g. inter-aural arrival times at the ears) into basic concepts on the other side (the place code), preserving the important information structure across the interface: a quasimorphism (more ITD = more in the code). The translation is not complete or perfect, so it's only partially veridical.
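
    A cartoon of that translation, with invented parameter values (see Ashida & Carr 2011 for the real biology): larger ITDs map lawfully onto coincidence detectors further along an array, but with loss.

    ```python
    N_DETECTORS = 20
    MAX_ITD_US = 300.0  # rough bird-scale ITD range, in microseconds

    def itd_to_place(itd_us: float) -> int:
        """Quantize a continuous ITD into a discrete place in the array."""
        clamped = max(-MAX_ITD_US, min(MAX_ITD_US, itd_us))
        frac = (clamped + MAX_ITD_US) / (2 * MAX_ITD_US)
        return round(frac * (N_DETECTORS - 1))

    # More ITD = more in the code (the quasimorphism) ...
    assert itd_to_place(200.0) > itd_to_place(50.0)
    # ... but the translation is lossy: nearby ITDs can collapse onto
    # the same detector, so the place code is only partially veridical.
    assert itd_to_place(50.0) == itd_to_place(51.0)
    ```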

    I'm not sure what you mean by "there are no hypotheses involved and no learning. The only thing that happens is the association of a real-world item with a cognitive category." Associationism is a type of learning, as is category formation.

    I agree with you with respect to syntax, semantics and phonology (small changes in vowel quality can lead to large changes in meaning, "bean"/"bin"; there must be a large dictionary). But such cases don't seem very common across cognition, so it's a bad idea to lump them all together.
