Gary Marcus here
discusses a recent brouhaha taking place in the European neuroscience
community. The kerfuffle, not surprisingly, is about how to study the brain. In other words, it's about money. The Europeans have decided
to spend a lot of Euros (real money!) to try to find out how brains function. Rather than
throw lots of it at many different projects haphazardly and see which gain traction, the science bureaucrats in the EU have decided to pick winners (an
unlikely strategy for success given how little we know, but bureaucratic hubris
really knows no bounds). And, here’s a surprise, many of those left behind are
complaining.
Now, truth be told, in this case my sympathies lie with
(at least some) of those cut out. One of
these is Stan Dehaene, who, IMO, is really one of the best cog-neuro people
working today. What makes him good is
his understanding that good neuroscience requires good cognitive science (i.e.
that trying to figure out how brains do things requires having some
specification of what it is that they are doing). It seems that this,
unfortunately, is a minority opinion. And this is not good. Marcus explains
why.
His op-ed makes several important points concerning the
current state of the neuro art in addition to providing links to the aforementioned
funding battle (I admit it: I can’t help but enjoy watching others fighting
important “intellectual battles” that revolve around very large amounts of
cash). His most important point is that, at this point in time, we really have
no bridge between cognitive theories and neuro theories. Or as Marcus puts it:
What we are really
looking for is a bridge, some way of connecting two separate scientific
languages — those of neuroscience and psychology.
In fact, this is a nice and polite way of putting it. What
we are really looking for is some recognition from the hard-core neuro community
that their default psychological theories are deeply inadequate. You see, much
of the neuro community consists of crude (as if there were another kind)
associationists, and the neuro models they pursue reflect this. I have pointed
to several critical discussions of this shortcoming in the past by Randy
Gallistel and friends (here). Marcus himself
has usefully trashed the standard connectionist psycho models (here). However, they just refuse to die and this has had the
effect of diverting attention from the important problem that Marcus points to
above: finding that bridge.
Actually, it’s worse than
that. I doubt that Marcus’s point of view is widely shared in the neuro
community. Why? They think that they already have the required bridge. Gallistel
& King (here) review the current state of play: connectionist neural
models combine with associationist psychology to provide a unified picture of
how brains and minds interact. The
problem is not that neuroscience has no bridge, it’s that it has one and it’s a
bridge to nowhere. That’s the real problem. You can’t find what you are not
looking for and you won’t look for something if you think you already have it.
And this brings us back to the
aforementioned battle in Europe. Markram
and colleagues have a project. It is
described here as attempting to “reverse engineer the mammalian brain
by recreating the behavior of billions of neurons in a computer.” The game plan
seems to be to mimic the behavior of real brains by building a fully connected
brain within the computer. The idea seems to be that once we have this fully
connected neural net of billions of “neurons” it will become evident how brains
think and perceive. In other words, Markram and colleagues “know” how brains
think: it’s just a big neural net.[1] What’s missing is not the basic
concepts but the details. From their point of view the problem is roughly to
detail the fine structure of the net (i.e. what’s connected to what). This is a
very complex problem, for brains are very complicated
nets. However, nets they are. And once you buy this, then the problem of
understanding the brain becomes, as Science
put it (in the July 11, 2014 issue), “an information technology” issue.[2]
And that’s where Marcus and
Dehaene and Gallistel and a few notable others disagree: they think that we
still don’t know the most basic features of how the brain processes
information. We don’t know how it stores info in memory, how it retrieves it
from memory, how it calls functions, how it binds variables, how, in a word, it
computes. And this is a very big thing not to know. It means that we don’t know
how brains incarnate even the most basic computational operations.
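To make concrete what these missing operations are, here is a toy sketch (purely illustrative; the dictionary "memory," the capital-city example, and all names in it are invented for exposition, not anything from Marcus or Gallistel) contrasting the classical picture of computation, with its addressable memory and variable binding, against a bare associationist lookup:

```python
# 1. Symbolic style: store, retrieve, bind a variable, call a function.
memory = {}                               # addressable read/write memory
memory["capital_of_france"] = "Paris"     # store under an address
x = memory["capital_of_france"]           # retrieve and bind to variable x

def report(city):
    # a callable function with a bindable parameter
    return f"The capital is {city}"

assert report(x) == "The capital is Paris"

# 2. Associationist style: knowledge lives only in pairwise
# connection strengths, tuned by co-occurrence.
weights = {("france", "paris"): 0.9, ("france", "rome"): 0.1}

def associate(cue):
    # respond with whatever is most strongly connected to the cue
    pairs = [(w, b) for (a, b), w in weights.items() if a == cue]
    return max(pairs)[1]

assert associate("france") == "paris"
# The net answers correctly, but there is no variable, no address,
# and no function-call mechanism anywhere in it -- which is the
# point about the missing bridge: we do not know how neural tissue
# implements the operations in part 1.
```

The sketch is not a claim about brains; it only fixes the reference of terms like "binding variables" and "calling functions" that the paragraph above invokes.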
In the op-ed, Marcus develops
an analogy that Gallistel is also fond of pointing to between the state of
current neuroscience and biology before Watson and Crick.[3] Here’s Marcus on the cognition-neuro bridge
again:
Such bridges don’t come easily or often,
maybe once in a generation, but when they do arrive, they can change
everything. An example is the discovery of DNA, which allowed us to understand
how genetic information could be represented and replicated in a physical
structure. In one stroke, this bridge transformed biology from a mystery —
in which the physical basis of life was almost entirely unknown — into a
tractable if challenging set of problems, such as sequencing genes, working out
the proteins that they encode and discerning the circumstances that govern their
distribution in the body.
Neuroscience awaits a
similar breakthrough. We know that there must be some lawful relation between
assemblies of neurons and the elements of thought, but we are currently at a
loss to describe those laws. We don’t know, for example, whether our memories
for individual words inhere in individual neurons or in sets of neurons, or in
what way sets of neurons might underwrite our memories for words, if in fact
they do.
The presence of money (indeed, even the whiff of lucre) has a
way of sharpening intellectual disputes. This one is no different. The problem
from my point of view is that the wrong ideas appear to be cashing in. Those
controlling the resources do not seem (as Marcus puts it) “devoted to spanning
the chasm.” I am pretty sure I know why too: they don’t see one. If your
psychology is associationist (even if only tacitly so), then the problem is one
of detail, not principle. The problem is getting the wiring diagram right (it is very complex, you know); the problem
is getting the right probes to reveal the detailed connections and hence the full networks.
The problems are not fundamental but practical; the kind that we can
be confident will advance if we throw lots of money at them.
And, as always, things are
worse than this. Big money calls forth busy bureaucrats whose job it is to measure progress, write reports, and convene panels in order to manage the money and the
science. The basic problem is that fundamental
science is impossible to manage due to its inherent unpredictability (as Popper noted long ago). So in
place of basic fundamental research, big money begets big science which begets
the strategic pursuit of the manageable. This is not always a bad thing. When questions are crisp and we understand roughly what's going on, big science can find us the Higgs field or the W bosons. However, when we are still awaiting our "breakthrough," the virtues of this kind of research are far more debatable. Why? Because in this process, sadly, the hard
fundamental questions can easily get lost, for they are too hard (quirky, offbeat, novel) for the system to digest.
Even more sadly, this kind of big money science follows a Gresham’s Law sort of logic, with Big (heavily monied) Science driving out small-bore fundamental
research. That’s what Marcus is pointing to, and he is right to be
disappointed.
[1]
I don’t understand why the failure of the full wiring diagram of the nematode
(which we have) to explain nematode behavior has not given more pause to the
leading figures in the field (Christof Koch is an exception here). If the problem were just the details of the
wiring diagram, then nematode “cognition” should be an open book, which it
is most definitely not.
[2]
And these large-scale technology/Big Data projects are a bureaucrat’s dream.
Here there is lots of room to manage the project, set up indices of progress
and success, and do all the pointless things that bureaucrats love to do. Sadly,
this has nothing to do with real science.
Popper noted long ago that the problem with scientific progress is that
it is inherently unpredictable. You cannot schedule the arrival of breakthrough
ideas. But this very unpredictability is
what makes such research unpalatable to science managers and why it is that
they prefer big, all-encompassing sciency projects to the real thing.
[3]
Gallistel has made an interesting observation about this earlier period in
molecular biology. Most of the biochemistry predating Watson and Crick has been
thrown away. The genetics that predates
Watson and Crick has largely survived, though elaborated. The analogy in the cognitive neurosciences is
that much of what we think of as cutting edge neuroscience might possibly
disappear once Marcus’s bridge is built. Cognitive theory, however, will
largely remain intact. So, curiously, if
the prior developments in molecular biology are any guide, the cognitive
results in areas like linguistics, vision, face recognition etc. will prove to
be far more robust when insight finally arrives than the stuff that most
neuroscientists are currently invested in.
For a nice discussion of this earlier period in molecular biology read this.
It’s a terrific book.
I just wanted to point out that it doesn't seem like Stanislas Dehaene is a signatory of the letter, but Ghislaine Dehaene-Lambertz is.
Agreed entirely. And it can definitely be exasperating.
Sorry for the necro, but I just read a related piece that I found rather enlightening regarding how neuroscientists think of the connection between neural and symbolic computation.
The claim is that there is no bridge to be discovered between the two, not just because of complexity but because some non-neural concepts will be cultural or folk concepts – think vicarious embarrassment or nostalgia. The implicit message is "neuroscience is fine, and if we can't link it to theories of cognition, that's a problem with the bogus notions of cognition these theories employ". I have a feeling most of mainstream linguistics would end up in the bogus group.