One warning: some of the hype around the Poeppel & Co paper presents it as final vindication of Chomsky's views (see here). From a linguist's point of view, it is rather the reverse: this is a vindication of the idea that standard neuro methods can be of utility in investigating human cog-neuro capacities. In fact, the title of the Poeppel & Co paper indicates that this is how they are thinking of it as well. However, the hype does in fact respond to a standing prejudice in the brain sciences, and so advertising the results in this way makes some rhetorical sense. As the Medical Press release accurately notes:
Neuroscientists and psychologists predominantly reject this viewpoint, contending that our comprehension does not result from an internal grammar; rather, it is based on both statistical calculations between words and sound cues to structure. That is, we know from experience how sentences should be properly constructed—a reservoir of information we employ upon hearing words and phrases. Many linguists, in contrast, argue that hierarchical structure building is a central feature of language processing.

And given this background, a little corrective hype might be forgivable. By the way, it is important to understand that the result is linguistically modest. It shows that hierarchical dependencies are something the brain tracks, and that statistics cannot be the explanation for the results discovered. It does not tell us which specific hierarchical structures are being observed or which linguistic structures they might point to. That said, take a look; the paper is sure to be important.
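The frequency-tagging logic behind the result can be sketched in a toy simulation (a hypothetical illustration only, not the paper's analysis code; numpy assumed). In the paper's design, isochronous syllables arrive at 4 Hz, so two-syllable phrases recur at 2 Hz and four-syllable sentences at 1 Hz; a response tracking only the acoustics shows spectral power at the syllable rate alone, while extra peaks at the phrase and sentence rates indicate tracking of structure that is not in the acoustic signal.

```python
import numpy as np

# Toy frequency-tagging sketch: syllables at 4 Hz, phrases at 2 Hz,
# sentences at 1 Hz. A "neural" response that tracks the hierarchy
# shows spectral peaks at all three rates, not just the syllable rate.
fs = 100                        # sampling rate (Hz)
t = np.arange(0, 40, 1 / fs)    # 40 s of simulated signal
rng = np.random.default_rng(0)

# Simulated response: syllable-rate component plus weaker phrase- and
# sentence-rate components, buried in noise.
signal = (np.sin(2 * np.pi * 4 * t)
          + 0.5 * np.sin(2 * np.pi * 2 * t)
          + 0.5 * np.sin(2 * np.pi * 1 * t)
          + 0.3 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_power(f_target):
    # Spectral magnitude in the bin closest to f_target.
    return spectrum[np.argmin(np.abs(freqs - f_target))]

# Peaks at 1, 2, and 4 Hz stand far above the neighboring noise bins.
for f in (1.0, 2.0, 4.0):
    near = np.abs(freqs - f)
    neighbors = spectrum[(near > 0.1) & (near < 0.3)]
    print(f"{f} Hz peak: {peak_power(f):.1f} vs neighbor mean {neighbors.mean():.1f}")
```

The point of the sketch is the inference pattern: the 1 Hz and 2 Hz peaks have no counterpart in the stimulus acoustics, so their presence in the response implicates internally built structure.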
"This is an important addition to the growing number of papers that are beginning to use linguistic notions seriously to map brain behavior"
People in the neuro world have been taking generative grammar seriously for a long time, and "linguistic notions" for longer still. In neuropsychology, for instance: Caramazza & Zurif, 1976 and Linebarger et al., 1983. In neuroimaging, people started investigating syntax pretty much as soon as the methods started being used. Take Mazoyer et al. 1993 as an example (S. Dehaene a coauthor, Jacques Mehler being the final author):
"Psycholinguistic models have tried to explain how the acoustic wave is mapped onto phonemes, how the sequence of phonemes makes contact with the lexicon, and how words are concatenated into phrases to which a meaning can be attributed. One may also ask, not only how these cognitive processes are organized and structured, but also how they are implemented in the brain."
"To perceive and understand speech, one must deal with the acoustical, phonological, lexical, prosodic, syntactic, and conceptual information conveyed by the signal"
Or Stromswold et al. 1996 (David Caplan being the final author, who takes generative grammar quite seriously), the title of the paper being "Localization of Syntactic Comprehension by Positron Emission Tomography":
"The most specific result was that an increase in rCBF occurred in part of Broca’s area when the processing of two closely related sentence types—sentences with center-embedded and right-branching clauses—was compared. The results provide evidence supporting the role of a portion of Broca’s area in the assignment of syntactic structure in sentence comprehension, or in operations associated with this process."
With respect to this particular paper, I really don't see how this advances cog neuro of language theoretically, except that it provides a neat trick that one could exploit (which is a really great technological advance). If you look at the figures showing the electrodes that gave them their results, what you'll see looks pretty much just like the figures in Mazoyer et al. 1993.
In fact, if anything, I feel like the field is moving away from close alignment with linguistic theory. That is my impression from attending the Society for the Neurobiology of Language Conference, which includes many talks on neurobiology, and pretty much zero talks on language (no presentations by psycholinguists or linguists). I think this is lamentable, and I hope papers like this will help do something to turn it around.
Right, I should have said "returns to linguistic categories as cog-neuro relevant." I also hope it will help turn things around.
@william This isn't an area I really know much about, but I thought the importance of this paper is that it shows a physical response to specific aspects of abstract hierarchical structure. Previous papers seem to me to have shown that the brain responds differently to hierarchy vs. non-hierarchy, but not to have shown a plausible mechanism whereby the hierarchy is physically represented in the brain. I had a quick scan of the Mazoyer paper and they don't show that, do they? So it's in this sense that I think this paper is important: it's basically addressing the old thorny issue of the mental/neural reality of the specifics of abstract structure being put to use (cf. Chomsky's Rules and Representations discussion). But I'm speaking from a position of ignorance, so perhaps that has already been shown.
@David
You make a good point - they are at least discussing the problem. I just find the postulated mechanisms implausible, and I don’t think they follow from the data, although they are consistent with them.
Ding et al. were careful not to actually say that they found a way that “the hierarchy is physically represented in the brain”. They say that oscillations provide a potential mechanism that might assist syntactic analysis in language comprehension.
Oscillations are thorny things. They are data, and they could also be mechanisms.
An analogy that springs to mind comes from a paper from Chomsky where certain structures were rendered illicit through the assignment of an “ungrammaticality” index; he even used an asterisk for this purpose (I don’t remember which paper). These indices were then removed through subsequent stages of the derivation, but only for certain structures, and a filter excluded structures that had the index. Assignment of licit/illicit could be a mechanism of the grammar, but just because we get useful acceptability judgments doesn’t mean that it is – it has to be shown that this is the right analysis.
The analogy isn’t perfect, but I think it underscores the point: oscillations could be mechanisms, but observing them doesn’t mean that they are. Whether or not they are depends on a lot more than these data, particularly so for syntax. And oscillatory mechanisms of the kind espoused in the paper commit one to a certain view of how the brain works, which to me is a very un-Gallistel, Empiricist kind of view: the brain receives an input and generates an oscillation that matches its expectation of the timing of subsequent external events. This type of thing might even exist and be useful for comprehension, but is this the kind of mechanism we want for syntax?
I see. Thanks. I guess a way to reconcile it with a Gallistellian view (wow, that adjective makes him sound like a Time Lord!) is that the oscillations could be a mechanism whereby the stored grammatical knowledge is put to use. I still think seeing something physical correlating with abstract structure is quite cool, even if it's just in use of language, as opposed to being the physical specification of the generative system itself.
There's a huge literature - not consulted by linguists, or even most neurolinguists - on the functional role of oscillations. A lot of the stuff on working memory, for instance, could (and should) easily be translated into linguistic terms, and there's huge potential for exploring hierarchical processing here, as Poeppel's team has shown (not just in this paper, but in much previous work). I think the literature I recently reviewed here (http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01515/abstract) really speaks against the view that oscillations are just reflexes of more fundamental processes. The hard task now is to marry the extensive cartographic and neuroimaging work with rhythm-based approaches.
Well, I am going to read your paper with interest!
Actually there is a lot of excellent neurolinguistics out there. IMO, two of the best examples are Andrea Moro and Angela Friederici. The former is senior author of a pioneering paper 'Broca's area and the language instinct' (Musso, M. et al. Nat. Neurosci. 6, 774-781 (2003)). Angela has published a string of excellent papers in which she and her collaborators explicitly investigate the neural mechanisms of human language syntactic structure. Just now she published a very important paper 'Merge in the human brain', freely downloadable here:
http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01818/abstract
Johan Bolhuis
In the same vein, witness another FPsych paper by Elliot Murphy ("Labels, cognomes, and cyclic computation" http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4453271/), published this year. It definitely takes the aims/concerns/results of GG seriously. No idea if that indicates a trend in either direction, but it deserves recognition.
The Musso et al. 2003 study was interesting. I see it as very similar to Ding et al. in its spirit - useful for converting unbelievers, but minimally informative with respect to the scientific questions.
As far as Friederici's work is concerned, I think it's great that it takes generative grammar and syntax seriously. However, I don't see much in this body of literature besides "look - syntax and Broca's area!" in like 100 variations of the same experiment. There are some studies that provide informative data, but where is the theory? I just don't see it.
Do we want people to use the vocabulary of generative grammar, or do we want real progress in understanding how the notions of generative grammar connect with the brain, in a, say, explanatory adequacy kind of way? It's frustrating as a neurolinguist to see that the generative grammarians generally only get interested in neurolinguistics when it can be used to say "look we were right!". You were already right. Now let's try to understand how generative grammar can connect with brains.
As an example, I find Ullman’s work on the Declarative/Procedural model far more informative on how language might be neurally implemented than any of Friederici’s studies. This work doesn’t talk endlessly about Merge and hierarchy, but it does mention these things. I think Ullman’s approach has its flaws, but overall it is eminently cogent:
A reasonable research program would thus be to identify domains that share commonalities with language… Importantly, if the systems underlying the target domains are well understood, they should yield clear predictions about language, based solely on non-language theories and data. This should provide far greater predictive power about language than research restricted to language, whose theories and predictions are generally if not always derived from evidence solely related to language itself… Importantly, the converse holds as well. That is, our understanding of many aspects of the representation, computation, and processing of language has progressed far beyond that of many other cognitive domains. So, the demonstration of neurocognitive links between language and other domains should also improve our understanding of the latter.
Why aren’t we talking about this sort of thing more, rather than the Dehaene work, the Ding work, the Friederici work?