Ewan sent me a
note a couple of days ago discussing an interesting vision article. It seems
that the accepted wisdom is that the retina, all by itself, does a whole lot of
what we might think of as high-level processing. It does this by incorporating
cells that “predict” what ought to be there. Ewan rightly thought that
thinking about what our vision friends do would be useful for priming our
thoughts about how to approach linguistic phenomena. Impressionistically, some
of this stuff reminds me of some old discussions about instruction vs selection
theories of “learning” that played out in immunology in the mid 20th
century. Both Chomsky and Massimo Piattelli-Palmarini discussed the immunological history as a model for how to think about the structure of FL and how UG gets applied to acquiring Gs.[1]
Thx Ewan for bringing this to our attention.
*****
I dropped Norbert a note about this review paper on what happens in the retina that came across my desk via the Twittersphere (thanks to DK). He suggested putting something on the
blog about it. As I told him, my initial excitement for this paper faded in the
best way possible as I read it (what I could understand of it). The abstract
had me expecting high-level descriptions of the functions of low-level parts so
excitingly insightful as to be almost controversially Marrian, a swift jab at
the eyes of anyone who would bristle at asking “what (formally) does this thing
do in the abstract?” What it delivered in that regard was better, though,
namely, high-level descriptions of the functions of low-level parts with so much erudite lab-bench detail that it winds up sounding self-evident that these
bits do that thing, and, by extension, self-evident that we should and indeed
must ask for different levels of description as a part of basic research. In
hindsight, I feel like the vision talks I have seen all have this sensible
flavor, so maybe this isn’t so surprising.
There is a bit
more going on here of interest to general cognitive science people. What they
actually thought would be stunning, I think, is the fact that the retina has
now been found to do a whole lot of fancy stuff---you know, the back of your
eye. Not just that, the back of your dumb little pet salamander’s eye---all by
its lonesome, no feedback. For example, apparently the retina detects object motion. Now, (a) this doesn't require first
recognizing any little object bits or in any way solving the problem of what
“objects” look like; what it really means is “non-background motion”, which
doesn’t require knowing anything about anything but motion; and (b) one of the cues it relies on is “differential motion”. If I move my eye around (as you and
your salamander brethren are constantly and incessantly doing, whether you know
it or not) then everything is moving, naturally. But certain bits of the scene
are moving differently (i.e., in different directions). Those must be things. So
here’s a kind of prediction: backgrounds keep moving in the same direction.
Now, this isn’t too surprising a prediction given that the reason the
background is moving is that your eyes are moving, so one might imagine this
“prediction” actually relying on a signal from the motor system driving the eye
motion itself (although it actually doesn’t, if I understand their stuff right).
Here’s a more interesting kind of prediction: the object also keeps moving in
the same direction. Suppose I hide a fish in a moving checkerboard. By that I mean: there is a moving checkerboard background, and all of a sudden there is movement, in the shape of a fish, in a direction contrasting with the background.
Apparently, when the “object” stops moving, the motion-detecting cells continue to be sensitive at the NEXT expected location of the fish. As they correctly point out, this retinal computation implements prior knowledge of inertia (as does the bit about the background, assuming I understand them right that it doesn’t rely on the motor signal). As a bonus, I would expect that this
location information could indeed then go on to give other cells clues about
object shapes, although I don’t know if this really happens.
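To make those two ideas concrete, here is a minimal sketch of my own (not the paper’s model, and in deliberately cartoonish Python): estimate the background motion, flag whatever moves differently from it, and use inertia to guess where the flagged thing should turn up next. The grid of motion vectors, the median-based background estimate, and the threshold are all illustrative assumptions, not anything from the paper.

import numpy as np

# Toy illustration only: local motion vectors on a grid, a background-motion
# estimate, and an inertia-based guess at the next location of whatever is
# moving differently from the background.

def detect_object_motion(motion_field, threshold=1.0):
    """Flag cells whose motion differs from the estimated background motion."""
    background = np.median(motion_field.reshape(-1, 2), axis=0)  # stand-in for eye-movement drift
    difference = np.linalg.norm(motion_field - background, axis=-1)
    return difference > threshold  # boolean mask of "thing-like" motion

def predict_next_location(position, velocity, dt=1.0):
    """Inertia as prior knowledge: assume the thing keeps moving the same way."""
    return position + velocity * dt

# A field drifting uniformly to the right (the background) with one patch (the "fish") moving up.
field = np.tile(np.array([1.0, 0.0]), (10, 10, 1))
field[4, 4] = [0.0, 2.0]
print(np.argwhere(detect_object_motion(field)))                           # -> [[4 4]]
print(predict_next_location(np.array([4.0, 4.0]), np.array([0.0, 2.0])))  # -> [4. 6.]

Note that nothing in the second function knows what a fish looks like; all it needs is where the anomalous motion was and where it was headed, which is the sense in which the “prediction” comes cheap.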
So, here’s a
cute polemic: see how easy it is to believe in things that aren't there? All
you need is prediction, and even a dumb old retina can do it (I think they do
this with salamanders or something). If my retina can hallucinate a whole fish
then why people cannot get behind my whole phonological system doing absolute
neutralization is beyond me. Here’s another one: some neuro-sentence processing
people ask whether certain “surprise” effects are due to predictions or to
low-level “pre-activation” of expected words. One needn’t be a psycholinguist
to understand this question, transposed to whatever domain. Question from the
retina gallery: what’s the difference? The high-level description of any such
circuitry is that it’s a “predictor” circuit.
Another general
lesson, this one from Norbert: inhibition plays a big role in isolating the
relevant factors. Here’s one: direction of motion. Something called a
“starburst amacrine cell” is responsible for direction-of-motion feature
detection (despite my typical early 1980s American Saturday morning advertising
childhood, this really calls to mind some kind of psychoactive sea creature
rather than any soft chewy candy). If I understand right, all the direction
feature detectors would actually be firing for motion in every direction all
the time were it not for these devious little creatures. The detector for
motion in direction x is wired to all
of the starburst cells BUT the ones sensitive to motion in direction x. Why? Because the starburst cells send
an INHIBITORY neurotransmitter. So, by gratuitous (and rather nutty-sounding)
use of negation, we construct a direction-x
detector. This kind of “negative sensitivity,” as Norbert points out, is sort
of like indirect negative evidence in language acquisition. To have indirect
negative evidence you need quite a bit of background, for it makes no sense to
look for what isn't there unless you have some idea of what should be there. So
given a fixed set of options, defined by the feature space, and a set of
expectations, you can use absence to provide useful information. Input
disposes, not proposes. Now, that’s more Norbert talking than me, as I am
notorious for squinting at the difference when it comes to language
acquisition. But surely it is a useful case to have kicking around to
scrutinize the difference, no? The idea being, I take it, that there is
something about wiring up ALL BUT x that
speaks to a greater level of “prior expectation” than if there were just a “pro-x” detector, one of zillions of positive
detectors for everything under the sun. I am inclined to squint because that is
the natural position of my eyes, but I can’t help but feel there’s something
there. What, I’m not exactly sure. Maybe it’s an interesting case to think
about…
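For what it’s worth, here is a toy rendering of that “all but x” wiring, my own gloss rather than anything from the paper: a detector receives blanket excitation from motion per se, plus inhibition from starburst-like units tuned to every direction except its preferred one, so its output survives only when the motion is in its preferred direction. The one-hot tuning, unit count, and weights are illustrative assumptions.

import numpy as np

# Toy gloss on the 'all but x' idea, not the actual retinal circuit.

N_DIRECTIONS = 8

def starburst_activity(stimulus_direction):
    """One inhibitory unit per direction; only the one matching the stimulus fires."""
    activity = np.zeros(N_DIRECTIONS)
    activity[stimulus_direction] = 1.0
    return activity

def direction_detector(preferred, stimulus_direction, excitation=1.0):
    """Excitation from motion per se, minus inhibition from every unit BUT the preferred one."""
    weights = np.ones(N_DIRECTIONS)
    weights[preferred] = 0.0                      # the 'all but x' wiring
    inhibition = weights @ starburst_activity(stimulus_direction)
    return float(max(excitation - inhibition, 0.0))  # survives only when inhibition is absent

# The detector for direction 3 stays silent for every direction except its own.
print([direction_detector(3, d) for d in range(N_DIRECTIONS)])
# -> [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]

The selectivity lives entirely in what the detector is NOT connected to, which is one way of putting the point about absence doing real work.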
[1] See, for example, Massimo’s “The Rise of Selective Theories: A Case Study and Some Lessons from Immunology,” in Language Learning and Concept Acquisition: Foundational Issues, A. Marras & W. Demopoulos (eds.), Ablex Publishing Co., Norwood, NJ, 1987, pp. 117-130.
As I am currently stuck in Montreal waiting for a flight delayed from
yesterday, I cannot give you a Chomsky reference. But if I recall correctly,
there is some discussion of this in Reflections.