Here’s
a paper that I just read that makes a very interesting point.[1]
The paper by Eric Jonas and Konrad Kording (J&K) has the provocative title
“Could a neuroscientist understand a microprocessor?” It tests the techniques
of neuroscience by applying them to a structure that we completely understand and asks whether these techniques allow us to
uncover what we know to be the correct answer. The “model system” investigated
is a vintage processor used to power video games in very early Apple/Atari/Commodore
devices, and the question asked is whether the techniques of cog-neuro can
deliver an undergraduate-level understanding of how the circuit works. You can
guess the answer: Nope! Here’s how J&K put it:
Here we will try to understand a known artificial
system, a historic processor by applying data analysis methods from
neuroscience. We want to see what kind of an understanding would emerge from
using a broad range of currently popular data analysis methods. To do so, we
will analyze the connections on the chip, the effects of destroying individual
transistors, tuning curves, the joint statistics across transistors, local
activities, estimated connections, and whole brain recordings. For each of
these we will use standard techniques that are popular in the field of
neuroscience. We find that many measures are surprisingly similar between the
brain and the processor and also, that our results do not lead to a meaningful
understanding of the processor. The analysis cannot produce the hierarchical understanding
of information processing that most students of electrical engineering obtain.
We argue that the analysis of this simple system implies that we should be far
more humble at interpreting results from neural data analysis. It also suggests
that the availability of unlimited data, as we have for the processor, is in no
way sufficient to allow a real understanding of the brain. (1)
This negative result should, as J&K put it, engender
some humility in those who think we understand how the brain works. If J&K
are right, our techniques cannot even suss out the structure of a relatively
simple circuit, one which, in most ways that count, should be far easier to
investigate with these techniques than a brain is. We can, after all, lesion the circuit
to our hearts’ delight (but this does not bring us “much closer to an
understanding of how the processor works” (5)), take every imaginable
measurement of both individual transistors and the whole processor (but this
does not give “conclusive insight into the computation” (6)), and draw full
connectivity diagrams, and still we have little idea how the circuit is
structured to do what it does. So, it’s not only the nematode that remains
opaque. Even a lowly circuit won’t give up its “secrets” no matter how much
data we gather using these techniques.
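To make the flavor of these lesion experiments concrete, here is a minimal sketch in their spirit, not J&K’s actual pipeline: take a circuit whose function we know completely, knock out one element at a time, and see what the pattern of behavioral breakdowns reveals. The toy NAND-gate adder, the gate names, and the stuck-at-zero fault model are all my own illustrative inventions, nothing from the paper.

```python
# Sketch of a lesion study on a system we fully understand: a 1-bit full adder
# built from NAND gates. Each gate is knocked out in turn and we count how many
# input patterns now produce wrong outputs. All names and choices are illustrative.

from itertools import product

# Each gate maps to its two inputs, which are primary inputs ("a", "b", "cin")
# or earlier gates. Dict order is a valid evaluation (topological) order.
GATES = {
    "g1": ("a", "b"),
    "g2": ("a", "g1"),
    "g3": ("b", "g1"),
    "g4": ("g2", "g3"),    # g4 = a XOR b
    "g5": ("g4", "cin"),
    "g6": ("g4", "g5"),
    "g7": ("cin", "g5"),
    "sum": ("g6", "g7"),   # sum = a XOR b XOR cin
    "cout": ("g5", "g1"),  # carry out
}

def evaluate(inputs, lesioned=None):
    """Evaluate the circuit; a lesioned gate's output is stuck at 0."""
    values = dict(inputs)
    for gate, (x, y) in GATES.items():
        out = 1 - (values[x] & values[y])        # NAND
        values[gate] = 0 if gate == lesioned else out
    return values["sum"], values["cout"]

def behavior(lesioned=None):
    """Full input-output table of the circuit under a given lesion."""
    return tuple(
        evaluate({"a": a, "b": b, "cin": c}, lesioned)
        for a, b, c in product((0, 1), repeat=3)
    )

healthy = behavior()
for gate in GATES:
    broken = behavior(gate)
    n_errors = sum(h != b for h, b in zip(healthy, broken))
    print(f"lesion {gate:4s}: {n_errors}/8 input patterns give wrong outputs")
```

Even here, knowing which lesions break which behaviors tells you surprisingly little about the adder’s logic unless you already have the wiring diagram in hand, which is roughly the point J&K press at the scale of a whole processor.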
This is the negative result, and it is interesting. But
there is a positive observation that I want to draw your attention to as well.
J&K observe that many of their measures on the processor “are surprisingly
similar” to those made on brains. The cog-neuro techniques applied to the
transistors yield patterns that look remarkably like spike trains (5), look
“quite a bit like real brain signals” (6), and “produce results that are
surprisingly similar to the results found about real brains” (9). This is very
interesting. Why?
Well, there is a standard story in Brainville that
promotes the view that brains are entirely different from digital computers.
The J&K observation is that a clearly digital system yields
“surprisingly similar” patterns of data when the same techniques that are
applied to brains are applied to it. This suggests that standard neuro evidence is
consistent with the conclusion that the brain is a standard computing device.
Or, more accurately: were the brain such a device, the
kind of data we in fact find is just the kind of data we should expect to find. Thus, the
simple-minded view that brains don’t compute the way that computers do is, at
best, motivated by very weak reasoning (IMO, the only real “argument” is that
they don’t look like computers).
Why mention this? Because, as you know, there are
extremely good reasons, provided by Gallistel among others, to think that the brain must
have a standard classical Turing architecture, though we currently have no idea
how brains realize it. What J&K show is that systems that clearly are
classical computational systems in this sense generate the same patterns of
data as brains do, which suggests, at the very least, that the conclusion that
brains are not classical computers requires much more argument than is standardly
provided.
At any rate, take a look at J&K. It is a pretty quick
read. Both its negative and positive conclusions are interesting. It also
outlines a procedure that is incredibly useful: it always pays to test one's
methods on problems whose answer we already know. If the methods don't deliver
where we know the answer, then we should be wary of over-interpreting their
results when they are applied to problems we know almost nothing about. A toy
version of that sanity check is sketched below.
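Here is one way such a check might look, purely as an illustration: simulate a tiny network whose wiring we know because we made it up, run a popular analysis (correlation-thresholded “functional connectivity”) on the simulated recordings, and compare the inferred edges with the ground truth. The network, the thresholds, and the scoring are assumptions of mine, not anything from J&K’s code.

```python
# Toy sanity check: does a standard analysis recover structure we know is there?
# The ground-truth wiring, dynamics, and threshold below are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps = 6, 5000

# Ground-truth directed wiring: unit j drives unit i if W[i, j] != 0.
W = np.zeros((n_units, n_units))
W[1, 0] = 0.8   # 0 -> 1
W[2, 1] = 0.8   # 1 -> 2
W[4, 3] = 0.8   # 3 -> 4
W[5, 0] = 0.8   # 0 -> 5

# Simulate a simple linear autoregressive system driven by noise.
x = np.zeros((n_steps, n_units))
for t in range(1, n_steps):
    x[t] = 0.5 * x[t - 1] + x[t - 1] @ W.T + rng.normal(0, 1, n_units)

# "Popular method": threshold the pairwise correlation matrix.
corr = np.corrcoef(x.T)
inferred = (np.abs(corr) > 0.3) & ~np.eye(n_units, dtype=bool)

# Correlation is symmetric, so score against the undirected ground truth.
true_edges = (W != 0) | (W.T != 0)
print("true undirected edges:    ", int(true_edges.sum() // 2))
print("edges the method recovers:", int((inferred & true_edges).sum() // 2))
print("spurious edges it adds:   ", int((inferred & ~true_edges).sum() // 2))
```

Only because we built the system ourselves can we say whether the recovered “connectivity” is right, which is exactly the leverage J&K get from the processor.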
I think there's potentially another interesting analogy here between neurons and transistors. A typical computer (say a smartphone) has a very large number of transistors functioning essentially as switches and a much smaller number functioning as amplifiers. Each individual transistor is a very complex thing in terms of its physical properties. However, it’s only for the tiny fraction of transistors functioning as amplifiers that any detailed understanding of their physical states is necessary for an understanding of the whole system. I wildly speculate that the same may hold true of brains. What you really need is probably a high-level simulation of 99% of what’s going on and a much more detailed simulation of the remaining 1%. But you can’t know in advance what that 1% is. In other words, you can’t simulate everything at the maximum required level of detail, and yet you can’t know where it’s ok to skimp on the details unless you already have a pretty good idea of how the brain works.