David Poeppel and Michael Gazzaniga put together a great ideas symposium at the last CNS meeting. I heard about the event from Ellen Lau, who thought it great fun, as well as provocative and instructive. Boy, was she dead on. I watched the presentations (here), and there is a little commentary and some useful links here. The latter also links to the videos if you want one-stop shopping. The videos are short, 20 minutes each, and so easily watchable. I particularly urge you to look at the first two, by Gallistel and Ryan, and the last one by Krakauer (I really liked it, and he was a hoot). But they are all excellent and give a good sense of what is going on now.
A word about Randy's presentation: the thing that struck me (again) is how coherent the picture he is painting is. There is a deep story here, and it starts from what seem like innocuous starting points but quickly gets one into deep waters. Curiously (and importantly), both he and Ryan (who seems to disagree with most everything Randy put forward, though I really didn't see how he did, or even that he did) agree that the idea that memory is coded in synaptic weights in neural nets is a DEAD idea. Ryan noted that it has been categorically shown to be wrong. There may be something to synaptic connections, but it is NOT weights and adjustments to them. This seems like a really big deal to me if this is the cog-neuro consensus now. It puts a very deep nail into standard connectionist-associationist conceptions of mind/brain.
A few more remarks: First, it looks like modularity is back big time. Everyone buys into it. Krakauer noted that even (especially) the deep learning people are buying into this big time, just when cog-neuro types were running away from the idea. He notes the irony here. But it looks like this idea is back and everyone loves it again.
Second, instinct is back and so are biologically rich constraints on learning. See Krakauer again on "cost functions" and how they are intrinsic to the systems and where all the action is. Yes, I was nodding my head in agreement.
So modules, innate cost functions, classical computations and even recursion. All topics here and well received. It is a good time to look to tie GG to cog-neuro. Watch the videos and enjoy.
Gallistel is a great guy who has done some fascinating research, but I have to say that I didn't find much of interest in what he had to say. Some of it seemed quite vague (I know it's a "big ideas" talk, but still), some of it was flat wrong, and honestly a lot of the time I wasn't sure what the point was supposed to be.
Take the idea of Shannon information. Gallistel quotes Shannon as saying that the semantic content of communication is irrelevant to the engineering problem. But it isn't, of course; the visual system is tailored to handle visual information, just as the graphics card in a gaming computer is designed to handle the manipulation of image data. He goes on to say that "when you're born, you don't know what you're going to run into," which is also clearly false. All organisms have an evolutionary history that has prepared them for certain things--so there are things like instincts as well as species-specific defense reactions, where the mechanisms are pretty well worked out. Even when a particular behavior can be "pushed around" a bit more, there's still evidence of a biological predisposition; there's a reason it's easier to induce fear-conditioning to snakes than to kittens.
Then there's this bit about numbers. Computer scientists are actually dealing with circuits, of course; it's just that, unless you're dealing with practical engineering issues, it's convenient to *represent* circuits as numbers; it allows you to simplify things in a powerful way, and it enables you to perform all sorts of useful computations. It's not clear to me, however, what is being represented by whatever numbers Gallistel is referring to, nor how it's useful to think in that way. So yeah: Huh?
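To make the circuits-as-numbers point concrete, here is a toy sketch (my example, not anything from the talk): a one-bit full adder described once as wired-up gates and once as plain arithmetic on integers. The two descriptions agree on every input, which is the sense in which it's convenient to represent a circuit as numbers.

```python
# Toy illustration: a one-bit full adder, described two ways.

def adder_as_circuit(a, b, cin):
    """Gate-level description: two XORs, two ANDs, one OR."""
    s1 = a ^ b                    # first XOR gate
    total = s1 ^ cin              # second XOR gate -> sum bit
    cout = (a & b) | (s1 & cin)   # carry-out logic
    return total, cout

def adder_as_numbers(a, b, cin):
    """Numerical description: just add the bits as integers."""
    n = a + b + cin
    return n % 2, n // 2

# Exhaustive check: the two descriptions coincide on every input.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            assert adder_as_circuit(a, b, cin) == adder_as_numbers(a, b, cin)
```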
The discussion about Purkinje cells is the one empirically-grounded part of the talk, but even here I wasn't convinced. Gallistel says that the engram is *either* in the Purkinje cell *or* in the synapse, but that's a false dichotomy; changes at the synapse are *mediated* by what goes on in the cell (I'm thinking of things like ion influx, calcium/calmodulin-dependent protein kinase 2, up- and down-regulation of receptors, etc.). And synapses do more than amplify and deamplify; there are things like temporal summation, etc., that are more complicated than that. (I think he is trying to criticize connectionist models rather than our actual understanding of the synapse, but I'm not sure.)
So, in spite of all the molecules and numbers, this didn't do much for me. It seems like Gallistel should know all this, but he didn't seem to. I don't get it.
I haven't watched the video yet, but two nitpicks regarding more general points you make:
the graphics card in a gaming computer is designed to handle the manipulation of image data
This is false. Graphics cards have been general-purpose computing devices for over 15 years now; that's why they're called GPUs nowadays. They're used in tons of computing tasks where you need many cores, ranging from 3D graphics to video encoding to parallel parsing and physics computations. And the reason these things are possible is that computation is content agnostic. If two problems have comparable properties, it doesn't matter that one is about X and the other one about Y.
If you want an even more extreme example, play Doom on the Atari Jaguar (if you can find a working one). You'll notice that the famous Doom soundtrack is missing, and that's because in order to compensate for the underpowered CPU of the Jaguar, they had to outsource some of the game logic to the sound chip, turning it into a co-processor.
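The content-agnosticism point is easy to sketch: the operation below never asks whether its numbers are pixel brightnesses or audio amplitudes (toy data, invented for illustration).

```python
# One computation, two "contents". Scaling a sequence of numbers
# works the same whether the numbers describe an image or a sound;
# the algorithm only cares about the structure of the problem.

def scale(samples, factor):
    return [x * factor for x in samples]

pixel_row = [12, 200, 34, 90]      # brightness values (toy data)
audio_frame = [0.1, -0.4, 0.25]    # amplitude values (toy data)

brighter = scale(pixel_row, 1.5)   # "image processing"
louder = scale(audio_frame, 1.5)   # "audio processing"
```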
Computer scientists are actually dealing with circuits, of course
Again, no. Computer scientists deal with computation, and the hardware implementation thereof rarely matters. You could implement a computer as a line of water buckets and the essential theorems would still hold. Even the theory of algorithms and data structures, which is closer to hardware, would be unaltered. A linked list will always make insertion of new elements a constant-time operation, whereas an array will always make accessing an element a constant-time operation. None of that is dependent on hardware. Where hardware starts to matter a great deal is compiler design, but that is a niche topic with comparatively little influence on what aspects of computation are studied.
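A minimal sketch of the data-structure claim (my own toy example): inserting after a known node in a linked list is a fixed number of pointer updates however long the list is, and array indexing is a single offset lookup. Neither fact depends on what the hardware looks like.

```python
# Linked list vs. array, hardware-independently.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def insert_after(node, value):
    """Constant-time insertion: two pointer updates, however long the list."""
    node.next = Node(value, node.next)

head = Node(1, Node(2, Node(3)))
insert_after(head, 99)   # list is now 1 -> 99 -> 2 -> 3

array = [1, 2, 3, 4]
x = array[2]             # constant-time access: one offset computation
```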
I suppose it depends on exactly where you're drawing the line between the graphics card and other stuff. The frame buffer, the RAMDAC, etc., are pretty clearly designed for visual information. Some of these components *can* do other things, albeit less efficiently, but that doesn't change the fact that the card is set up for one specific purpose. Just in the same way, the visual system *can* take on some nonvisual spatial/auditory information in people who are blind, but it's not the same.
As for your second point, I'm not sure we're disagreeing. Any program you like is implemented in circuitry, since nobody ever really does it in buckets; it's just that it's useful to *represent* it as numbers. (Or, if you prefer, the circuitry is one way to implement the computations.)
None of that really explains whatever it was that Gallistel was getting at--the people who create network models and the like are certainly aware of computation, etc., and it's not clear how Gallistel's "numbers" are different.
Incidentally, I just watched Ryan's video, and I was gratified to see that he made some of the same points, while doing a much better job of dealing with the whole Shannon information idea.
I think Thomas's point still stands, that computer scientists mostly don't care about the implementation, but on the computation (if you'll permit slightly Marrian terms). How, say, a finite-state machine is realized means rather little to the computations it performs.
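A toy sketch of that point (my example, nothing from the thread): the same finite-state machine, here one accepting binary strings with an even number of 1s, realized two different ways. The realizations differ; the computation is identical.

```python
# One finite-state machine, two realizations.

def even_ones_table(s):
    """Realization 1: an explicit transition table."""
    delta = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd', '0'): 'odd', ('odd', '1'): 'even'}
    state = 'even'
    for ch in s:
        state = delta[(state, ch)]
    return state == 'even'

def even_ones_parity(s):
    """Realization 2: a single toggling parity bit."""
    parity = 0
    for ch in s:
        parity ^= (ch == '1')   # flip the bit on every 1
    return parity == 0

# The two realizations accept exactly the same strings.
for s in ['', '1', '11', '1010', '111']:
    assert even_ones_table(s) == even_ones_parity(s)
```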
You're right that Gallistel sidesteps a lot of the important issues, and the Purkinje discussion is a bit oversimplified. But in my opinion looking to psychologists or even to cog-neuro people is misguided for these types of questions. The field of computational neuroscience has exploded in the last 20 years and there are many exciting results on exactly the issues you raise here with respect to implementation/circuits.
I would also note that the molecular idea of bio computation is not unique to Gallistel. Christof Koch, for example, made the same point at least as early as his "Biophysics of Computation" book, which has a chapter on it.