Jeff Lidz sent me this great little piece by Randy Gallistel on his favorite theme: how most neuroscientists have misunderstood how brains compute. I’ve discussed Randy’s stuff in various FoL posts (here, here, and here). Here in just four lucid pages, Randy makes his main point again. If he is right (and the form of his argument seems impeccable to me), then much of what goes on in neuroscience is just plain wrong. Indeed, if Randy is right, then current neo-connectionist/neural net assumptions about the brain are about as accurate as 1950s-60s behaviorist conceptions were about the mind. In other words, at best of tertiary interest and, more likely, deserving to be completely forgotten. At any rate, Randy here makes four main points.
First, that there is recent evidence (discussed here) strongly pointing to the conclusion that information can be stored inside a single neuron (rather than in the connections among many neurons).
Second, that there are scads of behavioral results showing that brains store number values, and that there is no way to store numbers in connection weights, implying that any theory of the brain that limits itself to this kind of hardware must be at best incomplete and at worst just plain wrong.
Third, that there is a close connection between neural net “plasticity” conceptions of the brain and traditional empiricist conceptions of the mind (especially learning). In fact, Randy argues that these are largely flip sides of the same coin.
Fourth, that brains already contain all the hardware required to function like classical computers, the latter being the perfect complement to the computational cognitive theories that replaced behaviorism.
And all in four pages.
There is one argument that Randy hints at but doesn’t stress that I would like to add to his four. It is a conceptual argument. Here it is.
Whatever one thinks of cognition, it is clear that animals use large molecules like DNA and RNA for information processing. Indeed, this is now standard biological dogma. As Gallistel and King (here) illustrate, this system has all the capacities of a classical computer (addresses, read-write memory, variables, binding, etc.). So here's the conceptual argument: imagine an animal with the wherewithal to classically compute hereditary information, but which, instead of repurposing (exapting) this system for cognitive ends, developed an entirely different additional system for this purpose. In other words, it had everything it needed sitting there but ignored these resources and embodied cognition in a completely different way. Does this seem plausible? Is this the way evolution typically works? Isn't opportunism the main mover in the evolution game? And if it is, doesn't this suggest that Randy's conjecture must be right? In fact, wouldn't it be weird if large chunks of cognition did not exploit the computational machinery already sitting there in DNA/RNA and other large molecules? Wouldn't the contrary assumption bear a huge burden of proof? Well, you know what I think!
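For readers who want a concrete picture of what "addresses, read-write memory, variables, binding" amounts to, here is a minimal toy sketch in Python. It is purely illustrative, not a model of any actual molecular or neural mechanism, and all the names in it (the class, the address label, the stored interval) are hypothetical:

```python
# Toy sketch of the classical computing capacities Gallistel & King
# attribute to brains: addressable read-write memory, variables, and
# binding. Illustrative only; names are invented for the example.

class ReadWriteMemory:
    """A minimal addressable store: write a value to an address,
    read it back later, bind it to a variable in a computation."""

    def __init__(self):
        self._cells = {}  # address -> stored value

    def write(self, address, value):
        self._cells[address] = value

    def read(self, address):
        return self._cells[address]


# A schematic animal-style use: store a numerical quantity (say, the
# duration of a feeding interval), retrieve it later, and compute with
# it, as in the timing experiments Gallistel often cites.
memory = ReadWriteMemory()
memory.write("feeding_interval_s", 40.0)

interval = memory.read("feeding_interval_s")  # binding: stored value -> variable
anticipated_midpoint = interval / 2           # computation over the stored value
print(anticipated_midpoint)                   # 20.0
```

The point of the sketch is only that reading back an exact stored number and computing with it is trivial in a read-write architecture, which is precisely the operation that a pure connection-weight story struggles to provide.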
Why is this not the common perception? Why is Randy's position considered exotic? Here's the one-word answer: Empiricism! In the cog-neuro world this is the default view. There is little empirical support for this conception (see here for a review of the pas de deux between unsupported empiricism in psychology and tendentious reasoning in neural net neuroscience). Indeed, it largely flourishes where we know next to nothing about a domain of inquiry. Nonetheless, it remains the default conception of the mind. What Randy is pointing out (and has repeatedly pointed out, and is right to point out) is that it is fatally flawed, not only as a theory of mind but also as a theory of the brain. And its flaws are conceptual as well as empirical. I can't wait for the day this becomes the conventional wisdom, though given the methodological dualism characteristic of the cog-neuro-sciences, I suspect that day is not just around the corner. Too bad.