Comments

Saturday, February 4, 2017

Gallistel rules

There is still quite a bit of skepticism in the cog-neuro community about linguistic representations and their implications for dedicated, grammar-specific nativist components. This skepticism is largely fuelled, IMO, by associationist-connectionist (AC) prejudices steeped in a nihilistic Empiricist brew. Chomsky, Fodor, and Gallistel have decisively debunked the relevance of AC models of cognition, but these ideas are very very very (very…) hard to dispel. It often seems as if Lila Gleitman was correct when she mooted the possibility that Empiricism is hard-wired in and deeply encapsulated, thus impervious to empirical refutation. Even as we speak, the default view in cog-neuro is ACish, and there is a general consensus in that community that the kind of representations linguists claim to have discovered just cannot be right, for the simple reason that the brain simply cannot embody them.

Gallistel and Matzel (see here) have deftly explored this unholy alliance between associationist psych and connectionist neuro that anchors the conventional wisdom. Interestingly, this anti-representationalist skepticism is not restricted to the cog-neuro of language. Indeed, the Empiricist AC view of minds and brains has over the years permeated work on perception, and it has generated skepticism concerning mental (visual) maps and their cog-neuro legitimacy. This is currently quite funny, for over the last several years Nobel committees have been falling all over themselves in a rush to award prizes to scientists for the discovery of neural mental maps. These awards are well deserved, no doubt, but what is curious is how long it’s taken the cog-neuro community to admit mental maps as legit hypotheses worthy of recognition. For a long time there was quite a bit of excellent behavioral evidence for their existence, but the combo of associationist dogma linked to Hebbian neuro made the cog-neuro community skeptical that anything like this could be so. Boy were they wrong and, in retrospect, boy was this dumb, big time dumb!

Here is a short popular paper (by Kate Jeffery) that goes over some of the relevant history. It traces the resistance to the very idea of mental maps stemming from AC preconceptions. Interestingly, there was resistance both to the behavioral evidence for such maps (the author discusses Tolman’s work in the late 40s) and, later, to the neural evidence. Here’s a quote (5):

Tolman, however, discovered that rats were able to do things in mazes that they shouldn’t be able to do according to Behaviourism. They could figure out shortcuts and detours, for example, even if they hadn’t learned about these. How could they possibly do this? Tolman was convinced animals must have something like a map in their brains, which he called a ‘cognitive map’, otherwise their ability to discover shortcuts would make no sense. Behaviourists were skeptical. Some years later, when O’Keefe and Nadel laid out in detail why they thought the hippocampus might be Tolman’s cognitive map, scientists were still skeptical.

Why the resistance? Well, ACism prevented conceiving of the possibility. Here’s how Jeffery puts it (5-6):

One of the difficulties was that nobody could imagine what a map in the brain would be like. Representing associations between simple things, such as bells and food, is one thing; but how to represent places? This seemed to require the mystical unseen internal ‘black box’ processes (thought and imagination) that Behaviourists had worked so hard to eradicate from their theories. Opponents of the cognitive map theory suggested that what place cells reveal about the brain is not a map, so much as a remarkable capacity to associate together complex sensations such as images, smells and textures, which all happen to come together at a place but aren’t in themselves spatial.

Note that the problem was not the absence of evidence for the position. Tolman presented lots of good evidence. And O’Keefe/Nadel presented more (in fact enough more to get the Nobel prize for the work). Rather the problem was that none of this made sense in an AC framework so the Tolman-O’Keefe/Nadel theory just could not be right, evidence be damned.[1]

What’s the evidence that such maps exist? It involves finding neural circuits that represent spatial metrics, allowing for the calculation of metric inferences (where something is, and how far it is from where you are). The two kinds of work that have been awarded Nobels involve place cells and grid cells: the former code for location, the latter for distance. The article does a nice job of describing what this involves, so I won’t go into it here. Suffice it to say that it appears Kant (a big-deal Rationalist, in case you were wondering) was right on target, and we now have good evidence for the existence of neural circuits that would serve as brain mechanisms embodying Kant’s idea that space is a hard-wired part of our mental/neural life.

Ok, I cannot resist. Jeffery nicely outlines the challenge that these discoveries pose for ACism. Here’s another quote concerning grid cells (the most recent mental map Nobel here) and how badly they fit with AC dogma (8):[2]

The importance of grid cells lies in the apparently minor detail that the patches of firing (called ‘firing fields’) produced by the cells are evenly spaced. That this makes a pretty pattern is nice, but not so important in itself – what is startling is that the cell somehow ‘knows’ how far (say) 30 cm is – it must do, or it wouldn’t be able to fire in correctly spaced places. This even spacing of firing fields is something that couldn’t possibly have arisen from building up a web of stimulus associations over the life of the animal, because 30 cm (or whatever) isn’t an intrinsic property of most environments, and therefore can’t come through the senses – it must come from inside the rat, through some distance-measuring capability such as counting footsteps, or measuring the speed with which the world flows past the senses. In other words, metric information is inherent in the brain, wired into the grid cells as it were, regardless of its prior experience. This was a surprising and dramatic discovery. Studies of other animals, including humans, have revealed place, head direction and grid cells in these species too, so this seems to be a general (and thus important) phenomenon and not just a strange quirk of the lab rat.
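The logic of that passage can be made concrete with a toy model (my own illustration, not anything from Jeffery's paper; all names and the 30 cm figure are assumptions): a hypothetical one-dimensional "grid cell" whose evenly spaced firing fields are generated entirely from an internal distance estimate, integrated from self-motion, with no sensory association anywhere in the loop.

```python
import math

def grid_cell_rate(distance, spacing=0.30, width=0.05):
    """Toy 1-D grid cell: fires near every multiple of `spacing` metres.

    The periodicity is fixed by an internal metric (the animal's distance
    estimate), not by anything arriving through the senses.
    """
    offset = distance % spacing            # position within one period
    d = min(offset, spacing - offset)      # distance to nearest firing field
    return math.exp(-(d / width) ** 2)     # Gaussian firing field

# Path-integrated distance estimate: "counting footsteps".
steps = [0.05] * 12                        # twelve 5 cm steps
distance_estimate = sum(steps)             # 0.60 m travelled
```

Run this way, the model cell fires maximally at 0 cm, 30 cm, 60 cm, ... of travelled distance, and the 30 cm spacing lives nowhere but inside the organism, which is exactly the point the quote presses against ACism.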

As readers of FL know, this is a point that Gallistel and colleagues have been making for quite a while now and every day the evidence for neural mechanisms that code for spatial information per se grows stronger. Here is another very recent addition to the list, one that directly relates to the idea that dead-reckoning involves path integration. A recent Science paper (here) reports the discovery of neurons tuned to vector properties. Here’s how the abstract reports the findings:

To navigate, animals need to represent not only their own position and orientation, but also the location of their goal. Neural representations of an animal’s own position and orientation have been extensively studied. However, it is unknown how navigational goals are encoded in the brain. We recorded from hippocampal CA1 neurons of bats flying in complex trajectories toward a spatial goal. We discovered a subpopulation of neurons with angular tuning to the goal direction. Many of these neurons were tuned to an occluded goal, suggesting that goal-direction representation is memory-based. We also found cells that encoded the distance to the goal, often in conjunction with goal direction. The goal-direction and goal-distance signals make up a vectorial representation of spatial goals, suggesting a previously unrecognized neuronal mechanism for goal-directed navigation.

So, like place and distance, some brains have the wherewithal to subserve vector representations (goal direction and distance). Moreover, this information is coded by single neurons (not nets) and is available in memory representations, not merely for coding sensory input. As the paper notes, this is just the kind of circuitry relevant to “the vector-based navigation strategies described for many species, from insects to humans (14–19)— suggesting a previously unrecognized mechanism for goal-directed navigation across species” (5).
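To make "a vectorial representation of spatial goals" concrete, here is a minimal sketch (my own illustration, not the paper's model; the function names are invented) of the two computations at issue: path integration from self-motion signals alone, and an egocentric goal vector, i.e., distance to the goal plus direction relative to current heading.

```python
import math

def dead_reckon(start, moves):
    """Path integration: update an internal position estimate from
    self-motion signals (heading, step length) alone -- no landmarks."""
    x, y = start
    for heading, step in moves:
        x += step * math.cos(heading)
        y += step * math.sin(heading)
    return x, y

def goal_vector(position, heading, goal):
    """Egocentric goal vector: how far the goal is, and at what angle
    it lies relative to the animal's current heading (radians)."""
    dx, dy = goal[0] - position[0], goal[1] - position[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)                         # world-frame bearing
    egocentric = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return distance, egocentric
```

A bat that flew 3 m east and then 4 m north can know, from self-motion alone, that it is at (3, 4); the goal-direction and goal-distance cells reported in the paper would jointly encode something like the pair returned by goal_vector.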

So we have a whole series of neurons tuned to abstracta like place, distance, goal, angle of rotation, and magnitude, and these plausibly subserve the behavior that, as has long been noted, implicates just such neural circuits. Once again, the neuroscience is finally catching up with the cognitive science. As with parents, the more neuroscience matures, the smarter classical cognitive science becomes.
Let me emphasize this point, one that Gallistel has forcefully made but that is worth repeating at every opportunity until we can cleanly chop off the Empiricist zombie’s head. Cognitive data gets too little respect in the cog-neuro world. But in those areas where real progress has been made, we repeatedly find that the cognitive theories remain intact even as the neural ones change dramatically. And not only cog-neuro theories: the same holds for the relation of chemistry to physics (as Chomsky noted) and of genetics to biochemistry (as Gallistel has observed). It seems that, more often than not, what needs changing is the substrate theory, not the reduced theory. The same scenario is being repeated in the cog-neuro world. We actually know very little about brain hardware circuitry, and we should stop assuming that ACish ideas deserve default status when we consider ways of unifying cognition with neuroscience.

Consider one more interesting paper that hits a Gallistel theme, but from a slightly different angle. I noted that the Science paper found single neurons coding for abstract spatial (vectorial) information. There is another recent bit of work (here) that ran across my desk[3] and that also has a high Gallistel-Intriguing (GI) index.

It appears that slime molds can both acquire info about their environment and pass this info on to other slime molds. What’s interesting is that these slime molds are unicellular, so the idea that learning in slime molds amounts to fine-tuning a neural net cannot be correct. Whatever learning is in this case, it must be intracellular, not inter-neural, and this supports the idea of intracellular cognitive computation. Furthermore, when slime molds “fuse” (which they apparently can do, and do do), the information that an informed slime mold has can transfer to its fused partner. This supports the idea that learning can be a function of the changed internal state of a unicellular organism.
This is clearly grist for the Gallistel-King conjecture (see here for some discussion) that (some) learning is neuron, not net, based. The arguments that Gallistel has given over the years for this view have been subtle, abstract, and quite armchair (and I mean this as a compliment). It seems that as time goes by, more and more data that fits this conception comes in. As Gallistel (and Fodor and Pylyshyn as well) noted, representational accounts prefer certain kinds of computer architectures over others (Turing-von Neumann architectures). These classical computer architectures, we have been told, cannot be what brains exploit. No, brains, we are told repeatedly, use nets, and computation is just the Hebb rule with information stored in the strength of the inter-neuronal connections. Moreover, this information is very ACish, with abstracta at best emergent, rather than endogenous features of our neural make-up. Well, this seems to be wrong. Dead wrong. And the lesson I draw from all of this is that it will prove wrong for language as well. The sooner we dispense with ACism, the sooner we will start making some serious progress. It’s nothing but a giant impediment, and has proven to be so again and again.


[1] This is a good place to remind you of the difference between Empiricist and empirical. The latter is responsiveness to evidence. The former is a theory (which, IMO, given its lack of empirical standing has become little more than a dogma).
[2] It strikes me as interesting that this sequence of events reprises what took place in studies of the immune system. Early theories of antibody formation were instructionist because how could the body natively code for so many antibodies? As work progressed, Nobel prizes streamed to those that challenged this view and proposed selectionist theories wherein the environment selected from a pre-specified innately generated list of options (see here). It seems that the less we know, the greater the appeal of environmental conceptions of the origin of structure (Empiricism being the poster child for this kind of thinking). As we come to know more, we come to understand how rich is the contribution of the internal structure of the animal to the problem at hand. Selectionism and Rationalism go hand in hand. And this appears to be true for both investigations of the body and the mind.
[3] Actually, Bill Idsardi feeds me lots of this, so thx Bill.

41 comments:

  1. That was an interesting update, thanks. It seems to me that AC will probably only disappear once the idea that synapses provide the brain's basic memory mechanism is finally thrown out. There have also been some interesting developments in this respect recently, which are quite compatible with Gallistel's ideas on the subject matter (as well as my own musings): http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002572

    ReplyDelete
  2. The work you mentioned (in note 2) on the immune system was directly inspired by Generative Grammar -- see Niels Jerne's 1984 Nobel lecture: https://www.nobelprize.org/nobel_prizes/medicine/laureates/1984/jerne-lecture.pdf

    ReplyDelete
  3. Ah, Tolman--one of my favorites of all time! I like to say that *he* really started the cognitive revolution, because he showed that you actually could peek inside the behaviorists' black box.

    I'm puzzled by this take on the history, though. Tolman's results were a problem for radical/antitheoretical behaviorists of the Skinner/Watson sort (and maybe just Skinner). Other sorts of behaviorism, such as John Staddon's Theoretical Behaviorism, seem to have no trouble with Tolman.

    Furthermore, when you refer to the "cog neuro" people, I presume you're not referring to the field of cognitive neuroscience, where O'Keefe and Nadel's work has been influential for decades; that's partially how O'Keefe and the Mosers won the Nobel Prize. The general view is still that the hippocampus is involved in "binding" (which might or might not be associationism; I'm unsure what you mean by the term), but is *also* involved in mapping.

    As for this:

    "the cog-neuro community [believes] that the kind of representations that linguists claim to have discovered just cannot be right for the simple reason that the brain simply cannot embody them."

    No, not in my experience. Rather, the problem is that linguists have not *shown* how the brain embodies them. Cog-neuro people obviously want both cog and neuro. That exists for vision, spatial navigation, memory, etc., but not in linguistics.

    The gap becomes obvious if you teach this to students. With vision, you can show the neural basis of edge detection and so forth. If you mention the LAD, hands go up: How is that embodied in the brain? How does it work? Where is it? How do you know it's there? It was a bit easier with P&P, but even then: How does the brain encode, say, the pro-drop parameter? After the collapse of P&P, and Chomsky's subsequent retreat into what Barbara Scholz called the "rocks and kittens" version of the POS, there's nothing.

    The other part of the problem is that Chomsky's work doesn't seem terribly useful if you want to understand practical matters such as child language acquisition and how it can go wrong. Dorothy Bishop wrote a great blog post about that.

    ReplyDelete
    Replies
    1. So we can agree about Tolman, at least as reported by Jeffery. I cannot argue with you over history, as I don't know it. However, the Jeffery reconstruction comports well with Gallistel's reconstructions and so I am happy to buy her account. What you say does not actually contradict her reconstruction, so far as I can tell. She observes that it took quite a while for Tolman's maps view to gain purchase in the CN community despite the interesting evidence in its favor. Why? Her culprit is ACism. Does this make sense? Sure, ACers have long been suspicious of abstracta of the kind that define physical space. Places and metrics are hard to "perceive." Indeed, Rationalists like Kant thought these had to be built in to make perception possible (see Gallistel and Matzel). It seems that the O'Keefe et al work vindicates this view, though, if Jeffery is right, it took quite a while for the CN world to buy this. It took a Nobel or two to cement the possibility (amazing what large moneyed gifts will do for the convictions of a research community).

      Now to the Chomsky stuff: you note Bishop's post. And I would encourage everyone to take a look at this for I can think of no better single place where all the inanities of anti-Chomsky writing are so succinctly assembled. Let me just point out one: Talking about "colorless green ideas" and the autonomy of syntax thesis, Bishop notes the following:

      "From this, it was a small step to conclude that language acquisition involves deriving abstract syntactic rules that determine well-formedness, without any reliance on meaning. The mistake here was to assume that an educated adult's ability to judge syntactic well-formedness in isolation has anything to do with how that ability was acquired in childhood."

      Dumb, dumb, dumb, and ignorant. Clearly Bishop hasn't the faintest idea what the point was. From the very start, Chomsky has taken syntax and meaning to be intricately linked. Nothing he has said implies that meaning and form don't leverage one another in learning. Nothing. The point is that syntactic relations cannot be reduced to semantic ones. And nothing Bishop says gainsays this. Nor could it, because it is trivially true.

      Sadly the rest of the post is as ill informed. And the consequences you've drawn concerning the utility of Chomsky's work for acquisition is similarly off base. There is lots of interesting stuff on child language that builds on it (and that I have discussed in other posts). For example, work by Crain, Lidz, Thornton, Yang, among many others. I understand you are more moved by the criticisms (and that is your right). IMO, Bishop, Scholz, Tomasello, have no idea what they are talking about. I've tried to make good on some of this claim in various posts on FoL. If you are interested, take a look.

      Delete
    2. When I expressed my puzzlement about the history, I was referring to your take, not Jeffery's, specifically the claim that "what is curious is how long it’s taken the cog-neuro community to admit mental maps...." That claim doesn't appear in Jeffery, who focuses on the implications of cognitive maps for *behaviorism* (which I think you are calling ACism). I (still) have no idea what you could possibly mean by "cog-neuro" here. O'Keefe, Nadel, and the Mosers are all cog-neuro, in the standard use of the term, and many, many people have built on their work (see Maguire's work on the human hippocampus and navigation for an example). So when you write that "[it's] amazing what large moneyed gifts will do for the convictions of a research community," you've got it backwards; the convictions preceded, and in fact led to, the Nobels. (They're not bestowed by angels, you know.) Jeffery, by contrast, gets this right when she highlights the research in 1984 and 2005, all of which preceded the Nobel.

      So I think we agree on two other things: (1) Jeffery's article provides a good overview, and (2) you don't know the history.

      As for the putative "general consensus" (of which you provide no examples, and which I've never encountered), Jeffery's article is illustrative: People are convinced when there is both cognitive evidence and neuroscientific evidence for an idea. When one of those is missing, it's less convincing. An absence of neuroscientific evidence is particularly problematic; without it, it's hard to know whether people are discovering something real or just building pretty castles in the air.

      As for the developmental stuff, I'm less interested in hashing out colorless green ideas than in the other empirical findings--for instance, Tomasello (I think) has found that kids don't seem to set a parameter; they get the parameter right in familiar sentences, but get it wrong in novel ones, and this changes slowly--not in the manner of a parameter being set. More to the point, I happen to know quite a lot of people who work on child language acquisition and child language disorders, and I've asked them about the relevance of Chomsky's work. There are quite a few opinions. None was positive. I'll spare you the list, unless you want it; perhaps the most cogent one came from a colleague who observed that "speculations about universal grammar/LAD/common language learning abilities are prima facie irrelevant to kids who don't learn language in the usual way."

      Delete
    3. You are right. We agree on the essentials. As for where we disagree, I doubt we will change one another's minds. I suspect we don't and won't find the same studies compelling. Where we can agree is that more evidence is always better than less, and so all would like to see behavioral and neuro evidence converge. This is doable in some areas more easily than others. So we can plug electrodes into rats and bats and other animals to discover things about spatial maps and their neural bases. Language does not present the same opportunities. Thus neuro evidence will necessarily be more indirect. But such exists. Dehaene, Poeppel, and others have found neural signatures of grammatical operations. This evidence is not without problems, but given the constraints, it is interesting. But I doubt you will find this persuasive given the quality, IMO, of the work that you value.

      Btw, you might take a look at Lidz's discussion of some of these themes in LOT within the past several months. He reviews Tomasello's latest salvo.

      Delete
    4. I cut this out to save space in the previous post, but for what it's worth, I do agree that it'll always be harder in linguistics because you don't have a model organism on which you can conduct invasive research. No way around that, I'm afraid.

      Delete
    5. @Steven: An absence of neuroscientific evidence is particularly problematic; without it, it's hard to know whether people are discovering something real or just building pretty castles in the air.
      Maybe I'm construing your statement more widely than it was intended, but this strikes me as a very peculiar point of view. Even in computer science we don't find clear hardware-software correlates. What is the hardware equivalent of a priority queue, where in that circuitry do I find binary heaps or evidence for a quicksort algorithm? Those are extremely simple computational ideas, and they're already very far removed from anything on the hardware level. Even if the brain is much more modular than a general purpose computer (which is debatable since computers have very specialized circuitry, too), why would wetware be exempt from the general rule that the more abstract the computation, the harder it is to detect a concrete physical instantiation? And most of human cognition looks very abstract to me.
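      [Ed.: the priority-queue point above can be made concrete with a tiny sketch (my illustration, using Python's stdlib heapq): the structure behaves exactly as specified at the algorithmic level, even though nothing at the circuit level corresponds to its tree shape.]

```python
import heapq  # binary-heap-backed priority queue from the stdlib

# The heap property ("each parent <= its children") is a purely logical
# invariant: it exists at the level of the algorithm, not the circuitry.
pq = []
for task, priority in [("write", 2), ("ship", 3), ("test", 1)]:
    heapq.heappush(pq, (priority, task))

order = [heapq.heappop(pq)[1] for _ in range(3)]
# pops in priority order: test, write, ship
```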

      Delete
    6. Maybe so, but I'm a little leery of brain-computer analogies, and in any event, we *can* find physical instantiations, as we did with spatial navigation, memory, visual perception, etc.

      I do agree that you're not likely to find physical correlates of things like, I don't know, artistic creativity or compassion (at least, nothing beyond "Hey, this part of the brain is more active!," which is almost useless). And to touch on one of my bugbears, the attempt to find physical correlates for mental illness has largely been a failure even though mental illness obviously exists. Will the same thing happen for language? Who knows?

      Delete
  4. @StevenP: your case might be better served if you didn't rely solely on Chomsky's popular works, secondary sources, and ill-informed blog posts (which also rely on his popular works...etc).

    To take just one example, you've regularly referred to Chomsky's supposed "retreat to a rocks and kittens version of UG", citing Barbara Scholz. There is no such thing, as you (perhaps) would know if you had attempted to take seriously anything beyond YouTube and the blogosphere.

    ReplyDelete
    Replies
    1. Well, I'm not sure what Chomsky's "popular" works, lectures, etc., are for if they're not to provide insights about his views on language. I mean, maybe The Science of Language was just intended to make a quick buck--interview transcriptions can be a maximum-profit minimum-effort sort of thing--but my default belief is that people mean what they say when they write.

      As for secondary sources, blogposts, and the like, I cite them when I believe they have useful points, insights, etc. Useful points can emerge anywhere (if you believe that blog posts are inherently useless, why are you here, and why don't you criticize Norbert's purported reliance on Jeffery?). In the past, I've also cited papers by Pullum, Dabrowska, Tomasello, and others, if that's more useful.

      If there's a set of Chomsky's work that I'm supposed to heed and a set I'm supposed to ignore, I'd appreciate it if someone could spell it out for me. A list of apocrypha will suffice.

      Delete
    2. Do you expect an interview with Stephen Hawking to give you an accurate representation of his work in physics? One should always be careful to separate scientific models from the intuitions that drive them. Interviews are about the latter and thus are more likely to derail a scientific debate than to sharpen it, at least if one doesn't already know the technical machinery. Just think about all the folk misconceptions about quantum theory, quantum computation, etc. You don't want to base your evaluation of physics on that stuff.

      I'm not insinuating that this is the extent of your linguistic knowledge, but I agree with doonyakka that the proof is in Chomsky's formal proposals, not the simplified bullet point versions you find in his more philosophical/streamlined writing.

      My impression is that the issue is particularly bad with Chomsky in that his interviews often contain passages that I don't think can be interpreted correctly unless you already know the technical background. But maybe my impression is biased --- he's the only person reasonably close to my area of expertise who gets to talk big picture stuff for a general audience, so if something doesn't quite fit it won't fly under my radar.

      Delete
    3. @StevenP: What Thomas said.

      Imagine if I went onto the comments section of a prominent Physics blog, armed with my reading of Richard Panek's fascinating "The 4% Universe" and a few anecdotes about having spoken to academics in neighbouring fields, and attempted to convince physicists that the Standard Model should be thrown out entirely because, in the real sciences like neuroscience, a discrepancy like the one between the SM's predictions and observational cosmology would long ago have doomed the theory. But maybe things are done differently in physics than in the sciences.

      At best, I would be ignored.

      Although this is an extreme example, it is analogous and its absurdity demonstrates the slight crankishness of what you're trying to accomplish. You may find good reasons to argue that the entire generative grammar (GG) project is misguided, but you need to understand it first. And, just as I don't understand the Standard Model well enough to criticise it, despite having read dozens of popular physics books, I suspect that you don't (yet) understand GG's methods or findings well enough to dismiss it in the manner you're attempting.

      Delete
    4. Well, I didn't claim that it should be "thrown out entirely"; I think I originally wrote that, when a theory disagrees with data, the theory should be modified, constrained, *or* discarded. Usually it's the first two, which is what's happening with the Standard Model.

      If you'll indulge me, let me try a different approach. My field is the neuroscience of memory. I teach a variety of courses; most necessarily cover language and linguistics. When I read into and cover the problem of invariance, language disorders, the Wernicke-Geschwind model and its later revisions by Dronkers and others, semantic memory and category learning, the Shaywitzes' work on dyslexia, child language acquisition, etc., I'm faced with a literature that works in a familiar way--the data are collected in ways that are familiar to me, terms are carefully and clearly defined, theories and models are firmly grounded in data though not limited to them, and so forth. So it's easy to integrate with everything else.

      Then there's Chomskyan linguistics, generative grammar, and the like, which I sorta feel that I should cover because people are quite passionate about it. But every time I look into it, it seems to work in a very different way. Terms seem to have ever-shifting definitions, the types of data (and their use) are largely unfamiliar, research methods seem oddly non-empirical, theories and models seem to be revised and outright discarded without acknowledgement or good reason (what happened to deep vs. surface structure and P&P?), and the rhetorical style is more reminiscent of philosophy than of science. I'm left with a lot of verbiage in which I understand each individual word, but not the whole, and when I *do* think I understand and ask someone to confirm that understanding, the response is invariably that I've completely misunderstood, perhaps deliberately. And I'm far from alone.

      Moreover, the work seems to be of no relevance to my students, most of whom are interested in psychology, biomedicine, medicine and allied health, and biology.

      So after quite a long time at this, I'm inclined to leave generative grammar and Chomskyan linguistics out of my coverage of language and linguistics (except for a few points of history). I'd be more interested in the particulars of GG, etc., if I were more convinced that there was a "there there," so to speak. That didn't happen when I dug into the LAD literature, and I'm reluctant to waste more time unless there's a better promise of a payoff.

      Delete
    5. Yes, different fields have different methodology. Even within a small, highly specialized field like computational linguistics you'll find very different ideas about goals, methodology, data, and standards of evaluation. Is that really surprising to you?

      And yes, many things that your average generative syntactician cares about will be largely irrelevant to students from psychology, biomedicine, medicine, biology. That's again not surprising, just like you don't usually learn much measure theory in a statistics class.

      If your complaint is just that generative grammar is too far removed from your primary research interests --- neural wetware --- nobody would be particularly puzzled. I think what has rustled a few people's jimmies here is the following: Just because something is not immediately applicable to your research does not mean that it is fundamentally flawed, and you shouldn't be too ready to believe every criticism that's out there just because parts of it align with your perception.

      Delete
    6. Well, at the risk of repeating what I wrote in my last post, I'm not surprised *that* there's a difference. I'm surprised at *what that difference is*. Neuroscience, psychology, cog neuro, whatever use a variety of research methods, including computational modeling, case studies, behavioral studies, imaging, etc., but these tend to cross subfields. They do cross into certain areas of language and linguistics, but not into this particular corner of it. More to the point, the same is true for differences in rhetorical style, the way empirical evidence is treated, the need for clear operational definitions, etc. The differences are deeper than the differences between, say, vision research and memory research. The GG stuff seems to come from a very different planet than the (other?) scientific literature I usually read.

      Is it surprising that this work is irrelevant to my students? Well, you tell me. I'm left saying (for instance) that Chomsky's work on the Language Acquisition Device and the Poverty of the Stimulus seem almost completely irrelevant to students who want to learn about language acquisition and disorders thereof. I *think* that's true. Seems odd, though.

      I wasn't complaining that the work is too far from my research and teaching interests; if that's how I felt, I wouldn't be here. Not sure what more I can say about that, except that I don't see how you got that impression from anything I wrote.

      Delete
    7. Linguists do have a relationship to data that is substantially different from the other sciences, in several respects.

      1. They are immersed in it. Every linguist speaks and understands at least one language at a very high level, and many speak many (especially the Europeans and Asians). Anything anybody says or writes in a language you are interested in is potentially useful data.

      2. The structure of the data is completely different. We don't have nice mathematical relationships between variables, but a rather intricate distribution of 'Linguistically Significant Properties and Relations' (term partially plagiarized from Larson & Segal, _Knowledge of Meaning_) across overt performances (spoken, written and signed), ranging in length from one syllable to the entire Mahabharata (5-6 times longer than the entire Lord of the Rings trilogy, iirc), although syntax mostly focusses on 'sentences' of reasonable length. So the approach has to be really different.

      3. The above problem is amplified by the fact that these performances are built by selecting from tens of thousands of lexical units (exact number impossible to estimate, but 30-60k does not seem an insane suggestion), so the example sentences have to be representatives of the set of relevant ones. E.g. when Chomsky observed the ungrammaticality (but intelligibility, two related but different LSPRs) of 'read you books on Modern Music', this is a representative of 'eat the children broccoli', 'jogs Mary every Wednesday', etc., and one of the things that linguists are good at is finding potential counterexamples, either making them up on the spot, or noticing them floating around in the data bath that they are immersed in. So if the rules were different for pronoun and full NP subjects, this would have been noticed very quickly, almost certainly by Chomsky himself prior to publication. If you can't read & understand the language, you have to take it on faith that the examples are well chosen; if you can, you can challenge them yourself, preferably after studying some textbooks to get a sense of how it's done ('syntactic argumentation').

      4. Experimenter Effects on experimental subjects other than oneself are mysteriously suppressed or even absent (Clever Hans is off in the back paddock somewhere). So, if you ask native speakers of reasonably prestigious varieties of a language what is good/bad, appropriate/inappropriate, etc. they will usually tell you what is true, rather than what they think you want to hear. My introduction to this was trying to teach a class of Australians the promise/persuade contrast:

      John persuaded Mary to leave
      John promised Mary to leave

      and discovering that in Oz English, the second sentence is a) ungrammatical b) if it means anything, means 'John promised Mary that she would get/be allowed to leave'.

      Otoh it is clear that you will tend to perceive the results that you want to, so you have to ask other people, and it is obviously important to supplement the somewhat haphazard-looking traditional evidence-gathering methods with experiments, and see how they compare, and people do indeed do this (Sprouse & Almeida, for example).

      All that said, the fact that generative work seems to have so little bearing on helping people with language deficits ...

      Delete
    8. ... is very interesting (previous post too long, evidently shortened it incompetently)

      A possible corner of the literature to look at might be an approach to second language acquisition called 'processability theory', which is concerned with problems that language learners have mastering increasingly complex structures in the languages they are learning. Many of these problems have to do with alternate ways of expressing the same 'truth functional meaning', that is, saying the same thing about whatever it is that they are about:

      Mary insulted John
      John was insulted by Mary
      John Mary insulted (special intonation required)
      John, Mary insulted him
      It is/was John that/who Mary insulted
      John is who Mary insulted

      The variations seem to be mostly triggered by the context of utterance ('information packaging', as it is sometimes called), and provide a lot of motivation for basic concepts of syntactic theory, such as 'underlying structure' (deep structure, in the original terminology). I think it would be kind of interesting if people who have L2-type problems with this kind of thing never show up in the clinic.

      Delete
    9. Interesting--(2) and (3) are somewhat specific to linguistics, but I think (1) and (4) would be viewed with great skepticism by anyone who's taken a class in research methods. Specifically:

      (1) We are certainly immersed in language. In much the same way, we are immersed in perception, memory, attention, etc. In those fields, however, immersion can *spark* further investigation ("Hey, why does that dress look different to different people?") but it's not taken seriously by itself, and in fact it's generally understood that it can mislead you.

      (4) See, I know this to be false, and that's one of the problems. For me,

      ?John promised Mary to leave

      doesn't really work. I don't think it's an issue of dialect, like "might could" or "Where's X at?" being OK in the South but not elsewhere. I *think* it means "John promised Mary that he would leave," but I could also see "John promised Mary that she could leave."

      Years ago, I gave this example to my students:

      (a) The office was filled with large, empty plastic bottles.
      (b) *The office was filled with empty, plastic large bottles.

      to show them that there were "rules" of English that they knew but couldn't really articulate. Pretty much everyone in the class would pick (a) as the better version. Then I taught online and allowed students to respond individually. 60% of them picked (a), a few of them picked (b), and 35-ish percent said that there was no difference.

      I could go on. "Instinctively, eagles that fly swim" doesn't really work for me (my first impression is that the person meant "intuitively", as in "it is intuitively obvious to me that..."), and the only reason I understand what is meant is that I know that eagles fly so that "swimming" must be what's new and different here. I can't find it, but somewhere in a Chomsky paper (maybe with Berwick or Hauser), there's a sentence that's listed as "obviously" having two meanings, and nobody I've ever asked can come up with more than one. And don't even get me started on double center embedding.

      The point isn't about these specific sentences, of course; it's that the whole approach seems a bit shaky. It seems particularly problematic given that these sentences are created and read by linguists, who probably parse sentences a bit differently from most people yet are just as susceptible to semantic satiation and the like.

      As for your last example, I've actually included that in discussions of *why* it's so hard to learn a second language; that sort of thing is just very hard to pick up. (The stimulus really is impoverished here.) From what my clinical colleagues tell me, such problems are not at all uncommon among native speakers; they're just not serious enough to warrant clinical referral most of the time. These problems can, however, lead to false alarms--if such people are evaluated secondary to a head injury, stroke, whatever, they're sometimes flagged as having an acquired language problem when really they don't.

      Delete
    10. This comment has been removed by the author.

      Delete
    11. They might be skeptical, but they'd be largely wrong, especially re 1. It's not entirely clear to me why the useful data bath exists for linguistics but not for psychology, but some special features of language are:

      1) the linguistic parts of a situation are perceptually extremely salient and stand out from the rest of it.
      2) we have had decent notation for these for quite a number of millennia (almost six is my recollection, going back to Sumerian, which we can still sort of read, it appears)

      The power of these notations can be seen by the fact that you can watch a movie in another language that you don't know, and understand what's going on pretty well if someone has supplied subtitles in your language. This makes observations relatively easy to record, and the observations are full of interesting stuff which it has been the business of linguistics to analyse. Historical syntax, for example, is entirely based on what can be gleaned from written records.
      A concrete example from my recent experience would be that there's a fair amount of literature about the 'polydefinite' construction in Modern Greek, where an adjective and the rest of the noun phrase can each have their own copy of the definite article: this is generally said to require a 'restrictive' interpretation of the adjective whereby it selects a subset of the reference of the rest of the noun, e.g.

      They fired the efficient the researchers

      means that they kept the inefficient ones (while "they fired the efficient researchers" could also have a 'nonrestrictive' interpretation where all the researchers are efficient, and you're just adding this information as a side-remark). But, playing some Greek music in my car, I hear Areti Ketime singing "the beautiful the blue your eyes" 'your beautiful blue eyes' (the position of possessive pronouns is weird in this language), and, since I know that this is a line from a love song rather than a horror or scifi movie, I can be confident that the person referred to does not have a pair of not-so-beautiful blue eyes rolling around in a drawer somewhere. So there is clearly a problem with the standard claim, but what the fix is is not so clear ... maybe this rule along with various others is just tossed under a bus in song lyrics. Perhaps one way of looking at it is that a vast amount of relevant experimental work has already been done by the authors who have composed things, so why not look at it?

      Moving on to 4, one bit of conventional method that survives is that if you didn't write it down, it didn't happen; I have more than once thought that a language helper had accepted or rejected something, checked the notes, and seen zero evidence for that, so self-experimenter effects are definitely there, in the form of distortions of memory as well as of the apparent acceptability to you of sentences you make up. Moving on to your class experiment, you don't actually know which kind of response is closer to the order they would use in performance. In eliciting judgements from a class in the usual way, there might be herd effects, people wanting to conform to the others, but I think online tests can have their own issues (people normally use scales of various kinds, not straight yes or no); what they will NOT normally do is tell you what you want to hear. The class signal was stronger than the online one, and that might show a problem, but they both point in the same direction.

      This is probably enough for now! I do agree with the idea that there needs to be a bit of a data cleanup initiative, and, indeed, it is happening.

      Delete
  5. This comment has been removed by the author.

    ReplyDelete
  6. | Well, I'm not sure what Chomsky's "popular" works, lectures, etc., are for if they're not to provide insights about his views on language.|

    Perhaps to provide a simple story of what the fuss is all about without either getting into the mathematical underpinnings of his syntax, or piling on wild neo-Darwinian speculations and presenting them as facts?

    |I mean, maybe The Science of Language was just intended to make a quick buck [...] |

    As opposed to, say, making strawman claims about blank slates and blaming the Vietnamese people for their death by Agent Orange in cheap paperbacks?

    | If there's a set of Chomsky's work that I'm supposed to heed and a set I'm supposed to ignore[...] |

    Perhaps start with Aspects and then work your way down to his recent works with Yang, Crain, Lidz, Lewontin et al.?

    This kind of rhetoric is neither fooling anyone, nor putting any scratches on Noam's legacy.

    Besides, that blog post by Bishop reeks of (deliberate?) ignorance. To point out just a few:

    - Trying to establish children's linguistic knowledge from their language usage or their awareness skills is not exactly an approach that holds a lot of credibility. Franck Ramus, for instance, discusses the acquisition of phonology, and reaches the conclusion that psycholinguistic data from most experimental studies are only interpretable within an information-processing model that sufficiently specifies details of different levels of representation (Ramus et al. 2010)

    - There’s a ton of evidence establishing that children process syntactic categories online long before their language use can give that away, and that this is not explainable by statistical learning alone; cf. Bernal et al. 2010

    - Tomasello’s theory, if it can be called a theory, has laughably little explanatory power built in.

    | perhaps the most cogent one came from a colleague who observed that "speculations about universal grammar/LAD/common language learning abilities are prima facie irrelevant to kids who don't learn language in the usual way." |

    The transparently deliberate ignorance of that statement aside, perhaps look into the works of Max Coltheart, Stephen Crain et al. ?

    It is perfectly fine to try to undermine Chomsky, or his monumental contributions to both formal academic matters as well as those concerning social justice. There’s nothing new about those attempts. They are transparent, and have been since the 1960s when he so courageously led the protests against neo-imperial terrorism in Indo-China. But to pretend that you can pass them off as objective, apolitical, scientific truth with conclusive evidence behind them is not just ineffective, but only makes oneself look clownish to those who are ignorant of neither history nor science.

    ReplyDelete
    Replies
    1. I'm relatively unfamiliar with (and uninterested in) Chomsky's political work; it's irrelevant here. The snark merits no reply, nor does this fawning take on Chomsky, nor does your aping of his rhetorical style. I'll respond to a few key points:

      "Perhaps to provide a simple story of what the fuss is all about"

      At that he fails. When I read a "popular" work and discuss it with an expert, I normally know where the overgeneralizations are, or an expert can give me a "Yes, but..." response. Here I'm just told that I've completely misunderstood (perhaps "deliberately"). Now, perhaps this is just a personal failing...but when people in closely-related fields (Bishop) and undisputed experts in the same field (Pullum) get the same response, one begins to wonder. I haven't seen anything like it outside of the reaction to critics of Wolfram's New Kind of Science.

      "Trying to establish children's linguistic knowledge from their language usage or their awareness skills is not exactly one that holds a lot of credibility."

      Wow. So studies of overregularization, anaphoric "one," deixis, etc., don't have a lot of credibility? Lidz's recent work on Korean doesn't have a lot of credibility? I'm sorry, but this work (well, not Lidz's, at least not yet) is covered in introductory undergraduate coursework.

      "There’s a ton of evidence that establish that children process syntactic categories online long before their language use can give that away...Bernal et al. 2010"

      Bernal and colleagues are trying to determine this using ERPs. Their argument depends on reverse inference, which has been considered dubious for at least a decade; see Poldrack's famous paper in TICS for an explanation of the underlying elementary logical fallacy.

      "The transparently deliberate ignorance of that statement aside..."

      I don't know why you'd expect that anyone would be deliberately ignorant of anything. But I'll ape your aping of Chomsky's style to show you how annoying it is. My colleague's statement was obviously true, in fact a truism, as anyone with a basic understanding of the underlying reasoning would immediately appreciate.

      See how irritating it is?

      Delete
    2. | I'm relatively unfamiliar with (and uninterested in) Chomsky's political work; it's irrelevant here. |

      Yes, it is, in fact, irrelevant here. Or rather, it would be if you weren't trying to make snarky remarks about quick bucks. And that about someone who, unlike you, has spent his life opposing power and unjust authority. And of course you are uninterested in his political works! How else would you pass off western apologetics for imperialism as objective and "evolutionary" truth? I would never have brought up his politics if you hadn't been arrogant enough to think you can mouth off at someone and no one will notice. I suggest you take some of your own advice.

      |The snark merits no reply,|

      Right back at you.

      | nor does this fawning take on Chomsky, nor does your aping of his rhetorical style |

      As opposed to your oh-so-un-fawning takes on Darwin? Pointing out that you have been snarky about someone's contributions, or pointing out your near ad-hominem attacks, is not "a fawning take on Chomsky", nor is it aping his supposed rhetorical style.

      In fact, for someone who talks about a distinction between "respectable" and "disrespectable" countries, tries to attribute genocides committed by the west in Indo-China to the unwillingness of the victims to roll over and die, and considers the presence of women's and native rights issues in a public march a distraction from Science, it's a little rich to be pointing fingers at others for their rhetoric.

      But of course, screaming about "rhetoric" whenever someone points out your use of shoddy, escapist logic to pass off your own distorted world views as objective, apolitical, scientific truth has always been a standard defense mechanism for you. It was always the same excuse with Gould, Rose, Lewontin, Chomsky, Fodor. Everyone has a motive, except you. Sorry, I am neither impressed nor intimidated. Merely exasperated.

      |
      At that he fails. When I read a "popular" work and discuss it with an expert, I normally know where the overgeneralizations are, or an expert can give me a "Yes, but..." response. Here I'm just told that I've completely misunderstood (perhaps "deliberately"). Now, perhaps this is just a personal failing...but when people in closely-related fields (Bishop) and undisputed experts in the same field (Pullum) get the same response, one begins to wonder. I haven't seen anything like it outside of the reaction to critics of Wolfram's New Kind of Science. |

      As others have pointed out before in other comments here, you do seem to be both (a) ignorant (deliberately or accidentally) of the formalisms of GG, and (b) unwilling to get into the details, for whatever reasons.

      Delete
    3. |Wow. So studies of overregularization, anaphoric "one," deixis, etc., don't have a lot of credibility? Lidz's recent work on Korean doesn't have a lot of credibility? I'm sorry, but this work (well, not Lidz's, at least not yet) is covered in introductory undergraduate coursework.|

      Again, trying to constantly shift positions to attribute your own interpretations to others is not accomplishing anything. I never said those studies are irrelevant. I was making the point that there are tons of studies showing that there is often a gap between what children know about their language and what they are able to give away in the form of externalization (due to various developmental factors). Of course, such a gap is not a conclusive argument for rich representations in and of itself. But it does highlight the ridiculousness of Bishop's argument that "if I can't see it, it does not exist".

      | Bernal and colleagues are trying to determine this using ERPs. Their argument depends on reverse inference, which has been considered dubious for at least a decade; see Poldrack's famous paper in TICS for an explanation of the underlying elementary logical fallacy.|

      No one claimed that the evidence was final, conclusive and irrefutable. No one even claimed that it was not nuanced. Except maybe you. You seem to be struggling under the delusion that every piece of evidence someone else presents is merely nuanced, as it often is, while what you throw in others' faces is the supreme jewel of truth. The same argument you make can be applied to the paper(s) you cite. See, for instance, the works of Colin Phillips, David Poeppel, William Matchin, Stephen Crain, Max Coltheart et al.

      | I don't know why you'd expect that anyone would be deliberately ignorant of anything.|

      Obviously to avoid being faced with the nuances of criticisms that they want to pass off as absolute and infallible.

      | But I'll ape your aping of Chomsky's style to show you how annoying it is. My colleague's statement was obviously true, in fact a truism, as anyone with a basic understanding of the underlying reasoning would immediately appreciate. See how irritating it is? |

      Really??? You are going to blame me for that? You are the one who has been crankishly trying to mock Noam all along, and then when people point that out you blame them for your rhetorical style? But why not? You seem to have a gift for blaming the victims for their fate anyway. I believe Jerry Fodor put it best:

      "If you really must have a defense mechanism, I recommend denial. Its special charm is that it applies to itself, so if it doesn't work, you can deny that too." -- Jerry Fodor

      Now, I am sure you will find ways to blame it all on someone else, but I tire of your antics. If, ever, you have something meaningful to say, some meaningful criticism of Noam's works (as there are many, including ones by Putnam, Kripke and the like) that you can deliver without your usual pretense of being the messiah of objectivity, I am happy to discuss them with you.

      But I am tired of these nonsensical diatribes with the same undertones every time -- everyone has a motive but me. No one denies that everyone has a motive, but so do you. Science has never been apolitical, and others' works or theoretical preferences are no more impacted by subjective and personal biases than your own. Since you seem to be incapable of grasping that simple fact, your childlike cleverness with words does not impress me.

      Delete
    4. @Anon Ling & Steven P:
      It has been fun, but it is getting a bit intense (and not in a good way). Let's tone the personal animus down or I will take the editor's prerogative and block your comments. Please feel free to battle it out, but a little more good humor please. I know this sounds odd coming from me, but so be it. Thx.

      Delete
    5. Thanks @Norbert, and apologies for my part in it. I usually don't comment, being more interested in getting the various perspectives from more experienced people here. But I do take it personally when mockery (as opposed to thoughtful criticism) and misrepresentation is directed at Noam, and frankly having seen this for a while I had to say something.

      Delete
    6. This comment has been removed by the author.

      Delete
    7. Sorry--I do get caught up in it, particularly when ideas (and people) in my field are called "dumb," and particularly when the description of my field doesn't match my own impressions. Your blog, your rules; I'll try to tone it down.

      Delete
    8. "I never said those studies are irrelevant. I was making a point that there are tons of studies that show that there is often a gap between what children know about their language, and what they are able to give away in the form of externalization"

      If you reread, I didn't say that you said they were irrelevant. I was referring to your claim that they weren't credible. To me it seems that they are. As I've said elsewhere, it's as though we're coming at it with different priors--if you're already well convinced by Chomsky, then Lidz's work is dynamite and this other stuff isn't; if you're not, then it's the other way around. This might just be intractable; that's one of the things that I'm wondering about.

      "No one claimed that the evidence was final, conclusive and irrefutable."

      No, but you claimed it was relevant or illustrative. I'm sorry, but it isn't. Your points necessarily rest on data and evidence, and if that crumbles, your points have to go with it. That's not personal; it's just the way evidence-based argument works.

      I can't really follow much of the rest of what you wrote--I haven't written anything about Darwin here, nor Agent Orange, nor the rest of it. If I were Steven Pinker, your comments might be more on point (I don't know, because I don't read his political stuff either). But I'm not.

      Delete
    9. | If you reread, I didn't say that you said they were irrelevant. I was referring to your claim that they weren't credible. To me it seems that they are. As I've said elsewhere, it's as though we're coming at it with different priors--if you're already well convinced by Chomsky, then Lidz's work is dynamite and this other stuff isn't; if you're not, then it's the other way around. This might just be intractable; that's one of the things that I'm wondering about. |

      Finally, something we can agree on! And I would agree with you that I don't know if there's any way around this epistemological divide.

      But I disagree with your views on child acquisition. Have you looked at the works of Colin Phillips and colleagues? I would recommend, for instance, Phillips & Ehrenhofer, 2015; Conroy et al. 2009; Goro et al. 2007; Kazanina & Phillips, 2010 etc.

      Not only are Chomsky's works relevant, and in ways that are of utmost importance, but the structures and dependencies he formulates guide a lot of what children do and don't do! They bridge the gap that was once thought to separate child and adult minds, for instance in the interpretation of anaphora.

      The claim is not that whatever Chomsky has proposed is spot on and no improvements are needed. Chomsky himself has revised his positions, and so have his students over the years, to arrive at the current state of affairs. These too are subject to revisions. But that's not the point. The point is that he has proposed detailed mathematical interpretations of linguistic computations in the mind, and those have guided almost all current research in neuro- and biolinguistics. See for instance, Ding et al. 2015, 2016 from the Poeppel lab, and Xiang et al. 2009 from Colin Phillips' lab! Not all the evidence has been positive, and some interpretations of Chomsky's proposal have turned out to be less valid than others. That's how Science is done. But to claim that Chomsky's works are irrelevant is so ludicrously absurd that one would not know where to begin to deconstruct such a claim. The papers you repeatedly seem to point to, including the absurdist comedies of Scholz & Pullum, either fail to see the point of Chomsky's proposal, or confuse Chomsky with Greenberg (a common thing).

      You would be well advised to ignore Pullum, Tomasello and their kind, and look into the commentaries of much more insightful scientists, such as the Nobel laureate microbiologist Salvador Luria or the computer scientists Donald Knuth and Richard Matthew Stallman, for a better understanding of what you want to criticize. There are things within GG that need criticism and revision, but so far you have not managed to illuminate even one of them.

      Delete
    10. @Anon Ling: I am a big fan of both Knuth and rms but what parts of their work in particular are you suggesting that we should read?

      Delete
    11. I can't recall specific citations for Knuth off the top of my head, but the one that I have lying around is Knuth (2003). The preface contains this passage that I had highlighted a while back...

      "[...] And people began to realize that such methods are highly relevant to the artificial languages that were becoming popular for computer programming, even though natural languages like English remained intractable. I found the mathematical approach to grammar immediately appealing --- so much so, in fact, that I must admit to taking a copy of Noam Chomsky's Syntactic Structures along with me on my honeymoon in 1961. During odd moments, while crossing the Atlantic in an ocean liner and while camping in Europe, I read that book rather thoroughly and tried to answer some basic theoretical questions. Here was a marvelous thing: a mathematical theory of language in which I could use a computer programmer's intuition! The mathematical, linguistic, and algorithmic parts of my life had previously been totally separate. [...]"

      As for RMS, try the formal documents for the HURD kernel. RMS has always been outspoken about Noam's influence on him, both politically and scientifically. In fact, the self-recursive acronyms he uses for his creations, in his own words, are tributes to Noam's approaches to formal languages, automata theory and his work with Schützenberger on enumeration and representation theorems.

      GNU --> GNU's Not Unix
      HURD --> HIRD of Unix-Replacing Daemons
      HIRD --> HURD of Interfaces Representing Depth
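To make the self-reference concrete, here is a toy sketch of how a mutually recursive acronym pair unfolds when expanded a bounded number of times. The `expand` function and the `defs` table below are my own invented illustrations, not anything from the GNU documentation:

```python
def expand(acronym, definitions, depth):
    """Expand a (possibly mutually) recursive acronym `depth` times.

    `definitions` maps an acronym to its expansion; any word of an
    expansion that is itself a key gets expanded on the next pass,
    so a self-referring table never bottoms out on its own --
    hence the explicit depth bound.
    """
    if depth == 0:
        return acronym
    words = definitions.get(acronym, acronym).split(" ")
    return " ".join(
        expand(w, definitions, depth - 1) if w in definitions else w
        for w in words
    )

# Toy table for the mutually recursive pair:
defs = {
    "HURD": "HIRD of Unix-Replacing Daemons",
    "HIRD": "HURD of Interfaces Representing Depth",
}
```

Each extra level of depth re-inserts the other acronym: `expand("HURD", defs, 2)` gives "HURD of Interfaces Representing Depth of Unix-Replacing Daemons", which is the point of the joke -- the expansion never terminates.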

      Delete
    12. But how is this relevant at all to GG or to linguistics? I know that Chomsky has been broadly influential outside linguistics, but that isn't what we are talking about here.

      Delete
    13. I'm aware that Chomsky was influential in computer science, particularly in the development of formal languages. I'm certainly not competent enough to understand the details, but Knuth and others have said it quite strongly. (I know who RMS is but don't know much about his views of Chomsky.)

      As for Pullum, Scholz, Behme, Dabrowska, Saffran, the Shaywitzes, MacWhinney, Bates, Tomasello, and plenty of others...I would certainly NOT be well-advised to ignore them. Tomasello, at the very least, has interesting data that bear on real questions (the theory isn't there yet). Why would I, or anyone else, ignore that? Knuth is great, but I'm more interested in humans than in computers.

    14. "Tomasello, at the very least, has interesting data that bear on real questions (the theory isn't there yet)"
      I would love to have you educate us about this. How would you like to write a post on this? I will put it on FoL: a discursive account of what T has done and why we should be interested, a review of the argument and the relevant data (at least in outline form). I would be very interested, and it would be a public service.

    15. I second Norbert's suggestion, and would read such a post with great interest.

    16. That's fair enough. I'll give it a shot. It'll take me some time, since this isn't my main research area, but I need to do it for one of my classes anyway. Then you can all have a whack.

  7. It ruins my day when people like Steven P take jabs at The Science of Language: Interviews with James McGilvray without understanding that it's a remarkable book, particularly intended for philosophers. (Even if one is convinced by the works of Kripke, Putnam, and Wittgenstein that "meaning is irreducibly social," it's worth trying to understand the arguments).

  8. The two kinds of work that have been awarded Nobels involve place cells and grid cells. The former involve the coding of direction, the latter the coding of distance.

    Place cells get their name because they tend to fire at single points in an environment. They sometimes show selectivity for direction but not always. Conversely, grid cells have multiple firing fields which are positioned as if sitting on the vertices of tessellated equilateral triangles. Collective grid cell activity is more like a positioning system than a simple coding of distance. There are some theoretical models of how grid networks could be used to perform path integration-like operations (e.g. http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000291) although afaik there’s no conclusive understanding on that point yet.
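    The triangular geometry above has a standard mathematical idealization: a single grid cell's rate map can be written as the sum of three plane waves whose wave vectors sit 60 degrees apart, which produces peaks on the vertices of tessellated equilateral triangles. A minimal sketch of that idealization (the function name and parameters are illustrative, not taken from the papers cited):

    ```python
    import numpy as np

    def grid_rate(pos, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
        """Idealized grid-cell rate map: the sum of three plane waves whose
        wave vectors are 60 degrees apart, normalized to [0, 1]. Peaks lie
        on a triangular (hexagonal) lattice with the given spacing."""
        pos = np.asarray(pos, dtype=float) - np.asarray(phase, dtype=float)
        k = 4 * np.pi / (np.sqrt(3) * spacing)            # wave number for the desired field spacing
        thetas = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
        waves = [np.cos(k * (pos[0] * np.cos(t) + pos[1] * np.sin(t))) for t in thetas]
        s = np.sum(waves)                                 # ranges over [-1.5, 3]
        return (s + 1.5) / 4.5                            # rescale to [0, 1]
    ```

    The rate is maximal at the cell's phase offset and at every lattice translation of it, and the map is invariant under 60-degree rotations about a peak, which is the hexagonal symmetry seen in real grid cells.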

    The exact relationship between place- and grid-cells isn't fully understood either. But one idea is that the job of place cells is to provide the excitation for grid patterns, and that they are therefore not necessarily part of the spatial code per se (http://www.nature.com/neuro/journal/v16/n3/full/nn.3311.html).

    As readers of FoL know, this is a point that Gallistel and colleagues have been making for quite a while now.

    The existence of grid cells largely undermines Gallistel's original argument for hard-symbolicism. His line of reasoning was something like:

    1. Path integration requires an internal representation of the animal's position in space.
    2. Numerical coordinates are a spatial representation which makes path integration very easy.
    3. Distributed neural networks are bad at counting, but von Neumann architectures are very good at it.
    4. Therefore: the brain must have strictly local and symbolic (von Neumann-esque) memory locations, which by definition would be intracellular (e.g. using RNA).
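    Point 2 is easy to make concrete: with numerical coordinates, path integration reduces to summing displacement vectors. A toy dead-reckoning sketch (the function name and inputs are hypothetical, chosen only to illustrate the point):

    ```python
    import numpy as np

    def path_integrate(headings, speeds, dt=1.0, start=(0.0, 0.0)):
        """Dead reckoning with explicit numerical coordinates: each step's
        displacement (speed * dt in the heading direction) is simply added
        to the running position estimate."""
        pos = np.array(start, dtype=float)
        for h, v in zip(headings, speeds):
            pos += v * dt * np.array([np.cos(h), np.sin(h)])
        return pos

    # Walking the four sides of a unit square brings the estimate back home:
    square = path_integrate(headings=[0, np.pi / 2, np.pi, 3 * np.pi / 2],
                            speeds=[1, 1, 1, 1])
    ```

    With a von Neumann-style memory holding the coordinates, the whole computation is two additions per step, which is why Gallistel found the numerical scheme so attractive.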

    But grid networks are a kind of distributed population code (and a very elegant one at that: http://www.nature.com/neuro/journal/v14/n10/abs/nn.2901.html), making everything from point 2 onwards redundant. That is, we don't need to posit an intracellular system for representing numbers because grid cells have already solved the problem of how to neatly encode spatial positions in distributed networks.

    Moreover, arguably the most successful class of models for the emergence of grid patterns is the attractor network (e.g. https://www.ncbi.nlm.nih.gov/pubmed/19021261). These networks typically show grid patterns emerging with the basic ingredients of synaptic plasticity, environmental input, and excitatory/inhibitory modulation of firing rate, i.e. all the things that Gallistel claimed couldn't plausibly account for path integration. Gallistel & King even devoted an entire chapter of their book to critiquing an attractor network, so the fact that grid cells turned out to be so amenable to modeling with attractor dynamics hardly seems like a mark in Gallistel's favour.
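    The flavour of such attractor dynamics can be seen in a toy example: a ring network with local (cosine-shaped) excitation and uniform inhibition will sustain a localized bump of activity after its input is removed, holding a value in the collective firing pattern rather than in any symbolic register. This is far simpler than the grid-cell models cited above, and every parameter here is illustrative:

    ```python
    import numpy as np

    # Toy ring attractor: cosine-tuned excitation plus uniform inhibition.
    N = 100
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
    J0, J1 = -1.0, 6.0    # uniform inhibition, peak recurrent excitation
    W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

    def simulate(cue_angle, steps_cue=100, steps_free=1000, dt=0.1):
        """Drive the network briefly with a bump-shaped cue, then let it run
        with no input; the recurrent dynamics sustain the bump as a memory.
        Rates are bounded in [0, 1] by a saturating nonlinearity."""
        r = np.zeros(N)
        cue = 1.5 * np.maximum(0.0, np.cos(theta - cue_angle))
        for step in range(steps_cue + steps_free):
            inp = cue if step < steps_cue else 0.0
            r = r + dt * (-r + np.clip(W @ r + inp, 0.0, 1.0))
        return r

    r = simulate(cue_angle=np.pi / 2)
    ```

    After the cue is switched off, the neurons near the cued direction keep firing while the rest stay silent: a self-sustained population code, with nothing resembling an addressable symbolic memory location in sight.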

    So, yes, grid cells are a bona fide internal representation. But they are radically different from anything in a von Neumann architecture, and instead seem almost perfectly adapted to the problem of representing information in self-organizing networks.
