For those of you who may be in Nijmegen January 21, 22, and 23, David Poeppel is going to give three terrific-sounding lectures. Here are the abstracts for the three. David is one of those doing God’s work in the cog-neuro of language, in that he is trying to find out how the stuff linguists have discovered lives in brains. Moreover, unlike many, he understands the difference between where and how (see lecture 3). I can think of no better way of spending three days in January, though, truth be told, why couldn’t these have been given in Rome or Barcelona? Oh well. Every silver lining has a cloud.
David Poeppel - Nijmegen Lectures
Lecture Series Title: (Un)conventional wisdom: Three neurobiological provocations about brain and language
The lectures discuss recent experimental studies that focus on general questions about the cognitive science and neural implementation of speech and language. On the basis of the empirical findings, I reach (currently) unpopular conclusions, namely that speech is special (not just ‘mere’ hearing), that language is structured (not just ‘mere statistics’), and that linguistic theorizing of an appropriately abstract computational form will underpin proper explanation.
Lecture 1: On how speech is pretty special
In this presentation, I consider the notion of specialization for sounds, and especially speech. Speech contains temporal structure that the brain must analyze to enable linguistic processing. To investigate the neural basis of this analysis, we used sound quilts – stimuli constructed by shuffling segments of a natural sound, approximately preserving its properties at short timescales while disrupting them at longer scales. We generated quilts from foreign speech, to eliminate language cues, and we manipulated the extent of natural acoustic structure by varying the segment length. Using fMRI, we identified bilateral regions of the superior temporal sulcus (STS) whose responses varied with segment length. This effect was absent in primary auditory cortex and did not occur for quilts made from other natural sounds, or from acoustically matched synthetic sounds, suggesting tuning to speech-specific spectrotemporal structure. When examined parametrically, the STS response increased with segment length up to ~500 ms. The results identify a locus of speech analysis in human auditory cortex, distinct from lexical, semantic, or syntactic processes.
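To make the quilting manipulation concrete, here is a minimal Python sketch of the core idea: chop a waveform into fixed-length segments and shuffle them, so structure within a segment (short timescales) survives while structure across segments (longer timescales) is destroyed. The function name, the placeholder signal, and the particular segment lengths are illustrative assumptions, and the full quilting procedure described in the literature also matches segment-boundary transitions, a step this sketch omits.

```python
import numpy as np

def make_quilt(signal, segment_len, rng=None):
    # Simplified quilt: shuffle equal-length segments of the waveform.
    # (The published procedure additionally matches boundary transitions.)
    rng = np.random.default_rng() if rng is None else rng
    n = len(signal) // segment_len
    order = rng.permutation(n)
    return np.concatenate(
        [signal[i * segment_len:(i + 1) * segment_len] for i in order]
    )

# Varying segment length manipulates how much natural structure survives,
# e.g., 30 ms (heavily scrambled) vs. 960 ms (largely intact) at 16 kHz.
sr = 16000
speech = np.random.randn(10 * sr)  # stand-in for a foreign-speech recording
short_quilt = make_quilt(speech, int(0.030 * sr))
long_quilt = make_quilt(speech, int(0.960 * sr))
```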
Lecture 2: On the sufficiency of abstract structure
The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures based on a grammatical system, resulting in a hierarchy of linguistic units, e.g., words, phrases, and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. In speech, hierarchical linguistic structures do not have boundaries clearly defined by acoustic cues and must therefore be internally and incrementally constructed during comprehension. Previous studies have suggested that cortical activity is synchronized to acoustic features of speech, approximately at the syllabic rate, providing an initial time scale for speech processing. But how the brain uses such syllabic-level phonological representations, closely aligned with the physical input, to build multiple levels of abstract linguistic structure, and to represent these concurrently, is not known. On the basis of MEG experimentation, I demonstrate that, while listening to connected speech, cortical activity at different time scales concurrently tracks the time course of abstract linguistic structures at different hierarchical levels, e.g., words, phrases, and sentences. Critically, the oscillatory neural tracking of hierarchical linguistic structures is dissociated from the encoding of acoustic cues as well as from the predictability of incoming words. The results suggest that a hierarchy of neural processing timescales underlies grammar-based internal construction of hierarchical linguistic structure.
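The logic of the result can be illustrated with a toy frequency-tagging computation: if syllables arrive at a fixed rate and phrases and sentences are built from them at slower rates, concurrent tracking of all three levels shows up as separate spectral peaks in the neural response, even when the acoustics carry only the syllable rate. The specific rates (4, 2, and 1 Hz) and the simulated trace below are assumptions for illustration, not the actual stimuli or MEG recordings.

```python
import numpy as np

fs = 100.0                    # sampling rate of the simulated trace (Hz)
t = np.arange(0, 60, 1 / fs)  # 60 s of simulated activity
rng = np.random.default_rng(0)

# Simulated response: tracking at the syllable (4 Hz), phrase (2 Hz), and
# sentence (1 Hz) rates, buried in noise. If only the syllable rate is
# present in the acoustics, slower peaks must reflect internally
# constructed structure.
response = (np.sin(2 * np.pi * 4 * t)
            + 0.5 * np.sin(2 * np.pi * 2 * t)
            + 0.5 * np.sin(2 * np.pi * 1 * t)
            + rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(response)) ** 2
for target in (1.0, 2.0, 4.0):
    i = np.argmin(np.abs(freqs - target))
    print(f"peak at {target:.0f} Hz: power = {power[i]:.0f}")
```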
Lecture 3: On the insufficiency of correlational cognitive neuroscience
We consider here two inter-related problems that current research attempts to – or should attempt to – solve. The first challenge concerns how to develop a theoretically well-motivated and biologically sophisticated functional anatomy of the language processing system. This “maps problem” is by and large a practical issue. Much as is true for vision, language research needs fine-grained maps of the regions that underpin the domain; which techniques can be harnessed to build an articulated model (in light of having no animal models) remains a difficult question. The second, closely related challenge concerns the “parts list” (or the set of primitives, or the ontology) for language actually under consideration. Coarse conceptions (such as the original “production” versus “comprehension”) are completely insufficient and incoherent. Current ideas, such as phonology versus syntax versus semantics, are also unlikely to provide a plausible link to neurobiological infrastructure. This “mapping problem” constitutes a more difficult, principled challenge: what is the appropriate level of analysis and granularity that allows us to map between (or align) the biological hardware and the computational requirements of language processing? The first challenge, the maps problem, addresses how to break down linguistic computation in space. The second challenge, the mapping problem, addresses how to break down language function into computational primitives suitable for neurobiology. If these problems are not explicitly tackled, our answers to ‘brain and language’ may remain correlational rather than mechanistic and explanatory.
"On the basis of the empirical findings, I reach (currently) unpopular conclusions…that language is structured (not just ‘mere statistics’)”
Doesn't this sound a lot like Frankland and Greene to you?