I ran across this piece by Christof Koch and Gary Marcus (K&M) on the current state of neuro-knowledge. The review is pretty anodyne, noting that spikes have a lot to do with it. As K&M put it:
"the basic unit of neuronal communication and coding is the spike (or action potential)."
I found the conjunction above intriguing, for communication and coding are two quite different things. Later on K&M further suggest that spikes are more important for the first than for the second function, pointing to "rewiring of our neural networks" as the basic mechanism for coding of long-term memories. This sounds pretty much like the standard "connectionist" trope, "changing connection weights" being the abstract mechanism for "rewiring." As K&M note, this approach has had some successes, and one big failure to date. Language and its mechanisms have so far resisted insightful analysis. Here's the quote:
"Another mystery concerns how the brain represents phrases and sentences. Even if there is a small set of neurons defining a concept like your grandmother, it is unlikely that your brain has allocated specific sets of neurons to complex concepts that are less common but still immediately comprehensible, like “Barack Obama’s maternal grandmother.” It is similarly unlikely that the brain dedicates particular neurons full time to representing each new sentence we hear or produce. Instead, each time we interpret or produce a novel sentence, the brain probably integrates multiple neural populations, combining codes for basic elements (like individual words and concepts) into a system for representing complex, combinatorial wholes. As yet, we have no clue how this is accomplished."
It is nice to see this in print. A standard critique of linguistics is that its algebraic approach to structure fits ill with the connectionist passions of neuroscientists. However, this kind of problem cuts two ways: either neuroscience's connectionism needs revision or algebraic linguistics does. K&M clearly consider the first option a live possibility, and this is as it should be.
One thing I did find somewhat disappointing, given the joint authorship, is the reluctance to address the algebraic issue. Marcus wrote a terrific book on the trouble that connectionist models have with certain kinds of problems that are easy to model algebraically. It would have been nice to have this issue aired in a review written by two people who generally take different positions on these issues. More specifically, the problem is not just language, but problems with algebraic structure quite generally (language being the poster child). Gallistel (and Marcus) have forcefully made this point, arguing that these kinds of problems, which are ubiquitous in both humans and other animals (ants!), suggest that we need a radical rethinking of how brains compute (here). Unfortunately, K&M don't mention these concerns, except obliquely as in the quote above.
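To make the "algebraic" worry concrete, here is a minimal toy sketch of my own (not anything from the review or from Marcus's book): a rule stated over variables, like "X Y X," applies to arbitrary novel tokens, whereas a store of item-specific associations knows only the items it has stored.

```python
# Toy illustration (my own, purely for exposition) of a rule over variables
# versus item-bound associations.

def is_aba(seq):
    """Rule over variables: the pattern X Y X, for ANY tokens X and Y."""
    x, y, z = seq
    return x == z and x != y

# Item-bound alternative: only "knows" the specific triples it has stored.
trained_examples = {("ga", "ti", "ga"), ("li", "na", "li")}

def is_aba_memorized(seq):
    return tuple(seq) in trained_examples

novel = ("wo", "fe", "wo")          # tokens never encountered before
print(is_aba(novel))                # True: the rule binds X to "wo" and Y to "fe"
print(is_aba_memorized(novel))      # False: nothing stored for these tokens
```

The point of the toy, on the Marcus/Gallistel line of argument, is that the first kind of generalization requires variables and the operations that bind them, and it is unclear how "changing connection weights" alone delivers that.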
One intriguing point K&M make concerns how many codes the brain uses. They say that the current wisdom has it that the brain employs many, many different codes, indeed a "chaos of codes," and this contrasts with the universal kind of coding the genome exploits:
"But the challenge is that spikes mean different things in different contexts. It is already clear that neuroscientists are unlikely to be as lucky as molecular biologists. Whereas the code converting nucleotides to amino acids is nearly universal, used in essentially the same way throughout the body and throughout the natural world, the spike-to-information code is likely to be a hodgepodge: not just one code but many, differing not only to some degree between different species but even between different parts of the brain. (my bold NH) The brain has many functions, from controlling our muscles and voice to interpreting the sights, sounds, and smells that surround us, and each kind of problem necessitates its own kinds of codes."
I was not actually all that clear on why we should expect the brain code to be a hodgepodge just because the brain has "many functions." After all, the genome builds many different kinds of organs and regulates many different kinds of processes, yet it seems that there is a single common code. But this may depend on what one means by "common code." Again, Gallistel's (and Marcus') conjectures are illuminating here: there are certain properties that a common computational architecture would demand, whose physiological basis, if the conjecture is correct, still remains obscure. This kind of architecture can run many different kinds of codes (just as your laptop can run many different kinds of programs), despite the uniformity of the coding mechanisms (read/write memory, variable binding, indices, etc.).
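Here is a minimal sketch of the laptop analogy as I read it (again my own illustration, not anything Gallistel or Marcus has written): one uniform read/write, variable-binding mechanism can carry very different "codes," so the multiplicity of functions does not by itself force a multiplicity of mechanisms.

```python
# Toy sketch: one mechanism (addressable read/write memory with variable
# binding), many different "codes" stored in it.

class Memory:
    def __init__(self):
        self._store = {}

    def write(self, var, value):   # bind a variable to a value
        self._store[var] = value

    def read(self, var):           # retrieve the value bound to a variable
        return self._store[var]

m = Memory()

# "Code" 1: a dead-reckoning vector, the sort of quantity ants are said to keep.
m.write("home_vector", (12.0, -3.5))

# "Code" 2: a combinatorial expression built by binding and combining parts.
m.write("subj", "Barack Obama")
m.write("np", ("poss", m.read("subj"), "maternal grandmother"))

print(m.read("home_vector"))   # (12.0, -3.5)
print(m.read("np"))            # ('poss', 'Barack Obama', 'maternal grandmother')
```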
A similar point can be made taking language as a model. Language can be used for many different ends (functions), from joke telling to quantum mechanics to gossip to prayer. Nonetheless, the underlying code is the same. There is a common FL code underlying the multiplicity of functions/uses to which the system is put. At any rate, though K&M's claim might be right, I suspect that the review's shying away from the algebraic challenge that cognition poses also leads it to downplay the possibility of a common neural structure. Maybe looking at the "spike-to-information" problem is obscuring the possibility of a common underlying computational mechanism. At least this is what I suspect Gallistel (and Marcus in other venues) might say.
The K&M piece is worth a read. It reports the current accepted wisdom and also points out the remaining "mysteries." My takeaway is that we know relatively little about how brains operate (despite having made lots of progress in some ways). K&M agree that this is so in domains like language, and this is news worth spreading.