I have linked here to a YouTube video of a talk that Randy Gallistel gave in Boston that goes
over his argument for an intra-neuron conception of brain computation.
This recapitulates much of what he argues in the paper I posted (here),
although it goes over in detail only one of the three recent experiments that support this
view (btw, the other two are also pretty neat). It is well worth watching, for the
presentation is very clear and easy to follow.
It also has an added bonus: a skeptical commentary by John
Lisman from Brandeis. Lisman argues that the results that Randy points to are
not inconsistent with a more classical inter-neuron circuit based conception of
brain computation. He reviews some of his own work in this regard, which is
interesting.
I personally found a few other things interesting as well.
First, Lisman has two main arguments, both promissory. The first is that, though
he agrees with Randy that the LTP/D evidence does not support the timing
required for the learning it has been pressed to serve, this does not entail
that some other version of the theory might
not be serviceable. The second is that he has recently shown how to block/erase
LTP/D accretions and hopes to run an experiment showing that this suffices to
erase a memory as well. He notes that should he be able to do this, it would
provide evidence for the view that the inter-neuron LTP/D mechanism underlies
memory. This experiment has not yet been
run, hence my description ‘promissory.’
Randy notes in his talk and paper that the LTP/D theory of
memory as strengthened connections among neurons in a net is a venerable
theory. It’s been around for a very long time. Lisman seems to agree. I thus found
it interesting that Lisman’s retort did not proceed by citing chapter and verse
of results in favor of the view but was largely defensive: a “that this version is
wrong does not mean that some other version may not be right” strategy for defending it.
Second, I found it interesting that Lisman did not seem to understand Randy’s
request to outline how numerical information could be stored in a synapse in a
way that it could be used. Randy wanted a description of a general mechanism
for doing this independently of how this information was then put to use.
Lisman kept retorting with accounts of how this or that particular phenomenon might be modeled.
In Randy’s view memory is the capacity to code information for storage that can
later be retrieved. There is no mention as to how this information will be
used. Presumably stored information can be used in multiple ways. For Lisman
memory is not something that is use-neutral but a link between one behavior and
another. Lisman, in effect, does not seem to understand what Randy is asking
him to provide, or, to be more charitable (as I should be), he rejects the idea
that memory is disconnected from the uses to which it is put.
I mention this for it reflects a point that is a central
feature of Empiricism: identifying what something is with what that thing does.
I’ve discussed this before (here), but
I liked this particular example of the difference in action. Let me say a bit more.
Many (me included) have tended to take the salient property
of Empiricism to be its penchant for associationism. Randy points to this in
his paper and lecture and he is, of course, correct to note the strong
relationship. However, I now think that the deeper feature of Empiricism is its
identification of the “powers/nature” of a thing (these are Cartwright’s terms,
see above link) with what it does. Rationalists reject this. To specify the
powers or nature of something is to provide an abstract description of its
properties. What it does is a complex interaction of these properties with
other things. So when Randy asks for a
physical basis for memory he wants an account of how to store information in a
retrievable form independent of how this information might be later used and
manifested. The mechanism is general. The occasion of use is just one
manifestation of it. For Empiricists what something is just is a summary
(perhaps statistical) of its effects. Not so for the Rationalist.[1]
Last point: Lisman takes Randy as arguing that brain
computation must supervene on DNA/RNA structure. Randy points out both in the
paper and in the talk that this is not a central feature of his view. We know
how information is coded in DNA/RNA and so it provides a proof of concept for
intra-neuronal computation that DNA/RNA machinery could provide the physical
mechanisms required for such computation. But, as Randy notes (see p. 6), it is
not necessary to his main point that the cognitive machinery be DNA/RNA.
Other kinds of molecules can serve to undergird such computation (the most
relevant feature, it seems, being that the molecule has two stable states, allowing
it to serve as a switch coding 1/0). This said, Randy notes in the video that we
currently do not know much about how information is coded within the neuron. Thus,
if Randy is correct, these very important details remain to be developed.
Randy’s argument is that this is a reasonable place to look for the
computational bases of cognition, given the apparent failure of the inter-neuron
conception and the amassing evidence that individual cells (rather than
networks) do in fact store acquired information in usable form.
To end: As I’ve said before, I find this to be really
exciting stuff. So, watch the video, it’s really fun.