I’m on the train leaving Nijmegen where David Poeppel just
finished giving three terrific lectures (slides here) on language and the
brain. The lectures were very well attended/received and generated lots of
discussion. Lecture 3 was especially animated. It was on the more general topic
of how to investigate the cognitive properties of the brain. For those
interested in reviewing the slides of the lectures (do it!), I would recommend starting
with the slides from the third, as it provides a general setting for the other
two talks. Though I cannot hope to do them justice, let me discuss them a bit,
starting here with 3.
Lecture 3 makes three important general points.
First, it identifies two different kinds of cog-neuro
problems (btw, David (with Dave Embick) has made these important points before; see here
and here
for links and discussion). The first is the Maps Problem (MP), the second the
Mapping Problem (MingP). MP is what most cog-neuro (CN) of language practice today
addresses. It asks a “where” question. It takes cognitively identified
processes and tries to correlate them with activity in various parts of the
brain (e.g. Broca’s does syntax and STS does speech). MingP is more ambitious.
It aims to answer the “how” question: how do brains do cognitive computation?
It does this by relating the primitives and causal processes identified in a
cognitive domain with brain primitives and operations that compute the
functions cognition identifies.
A small digression, as this last point seems to generate
misunderstanding among neuro types. As it goes without saying, let me say it: this
does not mean that neuro is the
subservient handmaiden of cognition (CNLers need not bow down to linguists
(especially syntacticians) though should you feel so moved, don’t let me stand
in your way). Of course there should
be plenty of “adjusting” of cog proposals on the basis of neuro insight. The
traffic is, and must be, two-way in
principle. However, at any given period of research some parts of the whole
story might require more adjusting than others, as what we know in some
areas might be better grounded than what we know in others. And right
now, IMO (and I am not here speaking for David) our understanding of how the brain
does many things we think important (e.g. how knowledge is represented in
brains) is (ahem) somewhat unclear. More honestly, IMO, we really know next to
nothing about how cognition gets neurally realized. David’s excellent slide
3:22 on C. elegans illustrates the general gap between neural structure and
understanding of behavior/cognition. In this worm, despite knowing everything there is to know about the neurons,
their connectivity, and the genetics of C. elegans, we still know next to nothing at all about what
it does, except, as David noted, how it poops (a point made by many others,
including Christof Koch (see here)).
This is an important result, for it belies the claim that we understand the
basics of how thought lives in brains “but for the details”: given all the
details we still don’t know squat about how and why the worm does what it does.
In short, by common agreement, there’s quite a lot we do not yet understand how
to understand. So, yes, it is a two-way
street and we should aim to make the traffic patterns richly interactive (hope
for accidents?) but as a matter of fact at this moment in time it is unlikely
that neuro stuff can really speak with much authority to cog stuff (though see
below for some tantalizing possibilities).
Back to the main point. David rightly warned against
confusing where with how. He pointedly emphasized that saying where something
happens is not at all the same as explaining how it happens, though a plausible
and useful first step in addressing the how question is finding where in the
brain the how is happening. Who could disagree?
David trotted out Marr (and Aristotle and Tinbergen (slide 3:67))
to make his relevant conceptual distinctions/points richer and gave a great
illustration of a successful realization of the whole shebang.[1] I strongly recommend looking at this example
for it is a magnificently accessible
illustration of what success looks like in real cog-neuro life (see slides 3:24-32).
The case of interest involves localizing the position of something based on
auditory information. The particular protagonists are the barn owl and the
rodent. Barn owls hunt at night and use sounds to find prey. Rodents are out at
night and want to avoid being found. Part of the finding/hiding is locating
where the other party is based on the sounds it makes. Question: how do they do
this?
Well, consider a Marr-like decomposition of the problem each
faces. The computational problem is locating the position of the “other” based on
auditory info. The algorithm exploits the fact that we gather such info through
the ears. Importantly, there are two of
them and they sit on opposite sides of the head (they are separated). This
means that sounds that are not directly ahead or behind arrive at each ear at slightly
different times. Calculating this difference locates the source of the sound. There is a
nice algorithm for this based on coincidence detectors and delay lines that
Jeffress put together in 1948, and precisely the neural
circuits that support this algorithm were discovered in birds in 1990 by Carr
and Konishi. So, we have the problem (place location based on auditory signals
differentially hitting the two ears), we have the algorithm due to Jeffress, and
we have the brain circuits that realize this algorithm due to Carr and Konishi.
So, for this problem we have it all. What more could you ask for?
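For the computationally inclined, here is a minimal sketch of the Jeffress logic in code. Everything in it (the sample rate, the “head” width, the noise burst, the locate_source helper) is invented for illustration; it is a sketch of the delay-line-plus-coincidence-detector idea, not the owl’s actual circuit or anyone’s published model.

```python
# A toy sketch of the Jeffress-style delay-line-plus-coincidence-detector
# scheme for locating a sound source from the interaural time difference.
# All numbers (sample rate, "head" width, the noise burst) are made up for
# illustration; this is the logic of the algorithm, not the owl's circuit.

import numpy as np

FS = 44_100             # sample rate in Hz (assumed)
HEAD_WIDTH = 0.18       # distance between the ears in metres (assumed)
SPEED_OF_SOUND = 343.0  # metres per second

def locate_source(left, right, max_itd_s=6e-4):
    """Estimate the azimuth (in radians) of a sound source from two ear signals."""
    max_lag = int(max_itd_s * FS)
    lags = np.arange(-max_lag, max_lag + 1)
    # One "coincidence detector" per candidate delay line: how strongly do the
    # two ear signals coincide when one is shifted by this many samples?
    responses = [np.dot(np.roll(left, lag), right) for lag in lags]
    best_itd = lags[int(np.argmax(responses))] / FS   # winning detector = place code
    # Convert the winning time difference into an angle (crude small-head geometry).
    return np.arcsin(np.clip(best_itd * SPEED_OF_SOUND / HEAD_WIDTH, -1.0, 1.0))

# Fake stimulus: the same noise burst reaching one ear 20 samples later than
# the other, as if the source were off to one side.
rng = np.random.default_rng(0)
burst = rng.standard_normal(FS // 10)
near_ear, far_ear = burst, np.roll(burst, 20)
print(f"estimated azimuth: {np.degrees(locate_source(near_ear, far_ear)):.1f} degrees")
```

The array of candidate lags plays the role of the delay lines, and the argmax is the winning coincidence detector; that place code is the whole trick.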
Well, you might want to know if this is the only way to
solve the problem and the answer is no. The barn owl implements the algorithm
in one kind of circuit and, as it turns out, the rodent uses another. But they
both solve the same problem, albeit in slightly different ways. Wow!!! In this
case, then, we know everything we could really hope for: what is being done, what
computation is being executed, and which brain circuits are doing it. Need I say
that this is less so in the language case?
Where are we in the latter? Well, IMO, we know quite a bit
about the problem being solved. Let me elaborate a bit.
In a speech situation someone utters a sentence. The
hearer’s problem is to break down the continuous waveform coming at her and
extract an interpretation. The minimum
required to do this is to segment the sound, identify the various phonetic
parts (e.g. phonemes), use these to access the relevant lexical entries (e.g.
morphemes), and assemble these morphemes to extract a meaning. We further know
that doing this requires a G with various moving and interacting parts
(phonetics, phonology, morphology, syntax, semantics). We know that the G will
have certain properties (e.g. generate recursive hierarchies). I also think
that we now know that human parsing routines extract these G features online
and very quickly. This we know because of the work carried out in the last 60
years. And for particular languages we have pretty good specifications of the
actual G rules that best generate the relevant mappings. So, we have a very good
description of the computational-level problem, a pretty good idea of the representational vocabulary required
to “solve” the problem, and some idea of
the ways that these representations are deployed in real-time algorithms. What
we don’t know a lot about is the wetware that realizes these representations or
the circuits that subserve the computations of these algorithms. Though we do
have some idea of where these
computations are conducted (or at least places whose activity correlates with
this information processing). Not bad, but no barn owl or rodent. What are the
current most important obstacles to progress?
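(A quick aside for the computationally minded: below is a deliberately silly toy of the comprehension pipeline just sketched, from continuous signal to segments to phonemes to morphemes to an assembled interpretation. Every stage, symbol, and lexical entry in it is invented for illustration; real segmentation, lexical access, and parsing are of course vastly harder.)

```python
# A deliberately silly toy of the comprehension pipeline sketched above:
# continuous signal -> segments -> phonemes -> morphemes -> assembled meaning.
# Every stage, symbol, and lexical entry below is invented for illustration.

TOY_PHONE_MAP = {0: "d", 1: "o", 2: "g", 3: "z"}      # hypothetical phoneme labels
TOY_LEXICON = {("d", "o", "g"): "DOG", ("z",): "PL"}   # hypothetical morpheme entries

def segment(signal, frame=3):
    """Chop the continuous 'waveform' into fixed-size frames (stand-in for segmentation)."""
    return [signal[i:i + frame] for i in range(0, len(signal), frame)]

def categorize(frames):
    """Map each frame to a phoneme label (stand-in for phonetic categorization)."""
    return [TOY_PHONE_MAP[round(sum(f)) % len(TOY_PHONE_MAP)] for f in frames]

def lexical_access(phonemes):
    """Greedily match phoneme spans against the lexicon (stand-in for lexical access)."""
    morphemes, i = [], 0
    while i < len(phonemes):
        for end in range(len(phonemes), i, -1):
            entry = TOY_LEXICON.get(tuple(phonemes[i:end]))
            if entry:
                morphemes.append(entry)
                i = end
                break
        else:
            i += 1  # no match: skip a phoneme (a real parser would do far better)
    return morphemes

def assemble(morphemes):
    """Put the morphemes together into an 'interpretation' (stand-in for syntax/semantics)."""
    return "+".join(morphemes)

signal = [0.1, -0.2, 0.3, 0.9, 0.2, 0.1, 1.8, 0.4, -0.1, 2.7, 0.2, 0.4]
print(assemble(lexical_access(categorize(segment(signal)))))   # -> DOG+PL
```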
In lecture 3, David identifies two bottlenecks. First, we
don’t really have plausible “parts-lists” of the relevant primitives and
operations in the G or the brain domain for language. Second, we are making our
inquiry harder by not pursuing radical decomposition in cog-neuro. A word about
each.
David has previously discussed the parts-list problem under the
dual heading of the granularity mismatch problem (GMP) and the ontological
incommensurability problem (OIP). He
does so again. In my comments on lecture 3 after David’s lecture (see here),
I noted that one of the nice features of Minimalism is that it is trying to
address GMP. In particular, if it is
possible to actually derive the complexity of Gs as described by GG in, say,
its GB form, in terms of much simpler operations like Merge and natural
concepts like Extension, then we will have identified a plausible set of basic
operations that it would make sense to look for neural analogues of (circuits
that track Merge, say, as Pallier et al. and Friederici and her group have been
trying to do). So Minimalism is trying to get a neurally useful “parts list”
of specifically linguistic primitive
operations that it is reasonable to hope that (relatively transparent) parsers
use in analyzing sentences in real time.
However, it is (or IMO, should be) trying to do more. The Minimalist
conceit is the idea that Gs only use a small number of linguistically special
operations, and that most of FL (and the Gs it produces) uses cognitively off-the-shelf
elements. What kind? Well, operations like feature checking. The
features language tracks may be different, but the operations for tracking them
are cognitively general. IMO, this also holds true of a primitive operation of
putting two things together that is an essential part of Merge. At any rate, if
Minimalism is on the right track here, then a nice cog-neuro payoff
is that it is possible to study some of these primitive operations that apply
within FL in other animals. We have seen this being seriously considered for
sound, where it has been proposed that birds are model species for studying the
human sound system (see here
for discussion). Well, if Merge involves the put-together operation and this
operation exists in non-human animals, then we can partially study Merge by
looking at what they and their brains do. That’s the idea and that’s how
contemporary linguistics might be useful to modern cog-neuro.
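To make the put-together point concrete, here is a minimal sketch of the set-formation core of Merge, i.e. the textbook Merge(a, b) = {a, b}. Labels, features, and everything that makes Merge specifically linguistic are deliberately left out; the point is only that the combining operation itself is dead simple and, plausibly, not unique to language.

```python
# A minimal sketch of the set-formation core of Merge: Merge(a, b) = {a, b}.
# Only the generic "put two things together" operation is shown; labels,
# features, and everything specifically linguistic are deliberately omitted.

def merge(a, b):
    """Combine two syntactic objects into an unordered, unlabeled set."""
    return frozenset({a, b})

# Recursive hierarchy for free: merged objects can themselves be merged.
dp = merge("the", "cake")   # {the, cake}
vp = merge("eat", dp)       # {eat, {the, cake}}
print(vp)                   # e.g. frozenset({'eat', frozenset({'the', 'cake'})})
```

Using frozenset here just means a merged object can itself be merged again, which is all the recursive hierarchy the sketch needs.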
BTW, there is something amusing about this if it is true. In
my experience, cog-neuro of language types hate Minimalism. You know, too
abstract, not languagy enough, not contextually situated, etc. But, if what I
say above is on the right track, then this is exactly what should make it so
appealing, which brings me to David’s second bottleneck.
As David noted (with the hope of being provocative (and it
was)), there is lots to be said for radically ignoring lots of what goes on when we actually process speech.
There is nothing wrong with ignoring intonation, statistics, speaker
intentions, turn-taking behavior, and much, much more. In fact, science
progresses by ignoring most things and trying to decompose a problem into its
interacting sub-parts and then putting them back together again. This last
step, even when one has the sub-parts and the operations that manipulate them,
is almost always extremely complicated. Interaction effects are a huge pain,
even in domains where most everything is known (think turbulence). However,
this is how progress is made, by ignoring most of what you “see” and exploring
causal structures in non-naturalistic settings. David urges us to remember this
and implement it within the cog-neuro of language. So screw context and its
complexities! Focus in on what we might call the cog-neuro of the ideal
speaker-hearer. I could not agree more, so I won’t bore you with my enthusiasm.
I’ve focused here on lecture 3. I will post a couple of
remarks on the other two soon. But do yourself a favor and take a look. They
were great, the greatest thing being the reasonable, hopeful ambition they are
brimming with.
[1]
Bill Idsardi has long encouraged me to look at this case in detail, for it
perfectly embodies the Marrian ideal. He was right. It’s a great case.