Thursday, November 30, 2017
Jerry Fodor died yesterday
Jerry Fodor was one of the great analytic philosophers of his era, and that is saying a lot. His contemporaries include Hilary Putnam, Robert Nozick, David Lewis, and Saul Kripke. In my opinion, his work will be profitably read for a very long time, and with constant amusement and provocation. Fodor was unique. He never avoided the deep issues, never avoided a good joke or trenchant jab. He always saw to the nub of the matter. And he was a philosopher whose work mattered for the practice of science. He made innumerable contributions to cognitive psychology (much of which psychologists would still profit from reading) and linguistics (not the least of which was insisting that we never confuse metaphysical issues (what is the case) with epistemological ones (how do we know that what is the case is the case)). He led the good fight against Empiricism (started early and never relented) in all its guises (behaviorism, connectionism, radical versions of natural selection), and his papers are still worth reading again and again and again today. He will be missed. Philosophy will be both duller and less illuminating without his constant contributions.
Friday, November 24, 2017
Repost (from Talking Brains) of a report from the neuro front by William Matchin
Here is a terrific post William Matchin first posted on Talking Brains reviewing some of the highlights of the two big recent cog-neuro meetings. He has kindly allowed me to repost it here for FoLers. FWIW, I find his take on this both heartening and consistent with my own impressions. I do think that there is a kind of appreciation dawning in the cog-neuro of language community of the utility of the kinds of abstract considerations concerning "competence" that GGers have advocated. At any rate, it is one of those long pieces that you wish were even longer.
+++++++++++++++++++++++++++++++++++++
Abstractness, innateness, and modality-independence of language: reflections on SNL & SfN 2017
Guest post by former student, William Matchin:
It’s been almost 10 years since the Society for the Neurobiology of Language conference (SNL) began, and it is always one of my favorite events of the year, where I catch up with old friends and see and discuss much of the research that interests me in a compact form. This year’s meeting was no exception. The opening night talk about dolphin communication by Diana Reiss was fun and interesting, and the reception at the Baltimore aquarium was spectacular and well organized. I was impressed with the high quality of many of the talks and posters. This year’s conference was particularly interesting to me in terms of the major trending ideas circulating there (particularly the keynote lectures by Yoshua Bengio & Edward Chang), so I thought I would write some of my impressions down and hear what others think. I also have some thoughts about the Society for Neuroscience meeting (SfN), in particular one keynote lecture, by Erich Jarvis, who discussed the evolution of language, with the major claim that human language is continuous with vocal learning in non-human organisms. Paško Rakić, who gave a history of his research in neuroscience, also made an interesting comment on the tradeoff between empirical research and theoretical development and speculation, which I will discuss briefly as well.
The notions of abstractness, innateness, and modality-independence of language loomed large at both conferences; much of this post is devoted to these issues. The number of times that I heard a neuroscientist or computer scientist make a logical point that reminded me of Generative Grammar was shocking. In all, I had an awesome conference season, one that gives me great hope and anticipation for the future of our field, including much closer interaction between biologists & linguists. I encourage you to visit the Faculty of Language blog, which often discusses similar issues, mostly in the context of psychology and linguistics.
1. Abstractness & combinatoriality in the brain
Much of the work at the conference this year touched on some very interesting topics, ones that linguists have been addressing for a long time. For a while it seemed that embodied cognition and the motor theory of speech perception were the dominant topics, but now it seems as though the tables have turned. There were many presentations showing how the brain processes information and converts raw sensory signals into abstract representations. For instance, Neal Fox presented ECoG data on a speech perception task, illustrating that particular electrodes in the superior temporal gyrus (STG) dynamically encode voice onset time as well as categorical voicing perception. Then there was Edward Chang’s talk. I should think that everyone at SNL this year would agree that his talk was masterful. He clearly illustrated how distinct locations in the STG have responses to speech that are abstract and combinatorial. The results regarding prosody were quite novel to me, and nicely illustrate the abstract and combinatorial properties of the STG, so I shall review them briefly here.
Prosodic contours can be dramatically different in frequency space for different speakers and utterances, yet they share an underlying abstract structure (for instance, rising question intonation at the end of a sentence). It appears that certain portions of the STG are selectively interested in particular prosodic contours independently of the particular sentence or speaker; i.e., they encode abstract prosodic information. How can a brain region encode information about prosodic contour independently of speaker identity? The frequency range of speech among speakers can vary quite dramatically, such that the entire range for one speaker (say, a female) can be completely non-overlapping with that of another speaker (say, a male) in frequency space. This means that the prosodic contour cannot be defined physically, but must be converted into some kind of psychological (abstract) space. Chang reviewed literature suggesting that listeners normalize pitch information by the speaker’s fundamental frequency, resulting in an abstract pitch contour that is independent of speaker identity. This is similar to work by Phil Monahan and colleagues (Monahan & Idsardi, 2010), who showed that vowel normalization can be obtained by dividing F1 and F2 by F3.
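To make the normalization idea concrete, here is a toy sketch in Python. The numbers and the functions are my own illustration, not Chang's or Monahan & Idsardi's actual procedures: pitch values are expressed relative to the speaker's fundamental frequency, and F1 and F2 are expressed as ratios to F3.

```python
# Toy sketch of speaker normalization (illustrative values only).

def normalize_pitch(pitch_contour_hz, speaker_f0_hz):
    """Express a pitch contour relative to the speaker's own fundamental frequency."""
    return [p / speaker_f0_hz for p in pitch_contour_hz]

def normalize_vowel(f1_hz, f2_hz, f3_hz):
    """Formant-ratio normalization in the spirit of Monahan & Idsardi (2010):
    divide F1 and F2 by F3."""
    return f1_hz / f3_hz, f2_hz / f3_hz

# A rising 'question' contour for two speakers with non-overlapping absolute ranges.
male_contour = [110, 115, 130, 160]     # Hz
female_contour = [210, 220, 250, 305]   # Hz

print(normalize_pitch(male_contour, speaker_f0_hz=110))
print(normalize_pitch(female_contour, speaker_f0_hz=210))
# Both normalized contours rise from 1.0 toward roughly 1.45: they land in the
# same abstract space despite very different absolute frequencies.

print(normalize_vowel(f1_hz=500, f2_hz=1500, f3_hz=2500))
```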
From Tang, Hamilton & Chang (2017). Different speakers can have dramatically different absolute frequency ranges, posing a problem for how common underlying prosodic contours (e.g., a Question contour) can be identified independently of speaker identity.
Chang showed that the STG also encodes abstract responses to speaker identity (the same response regardless of the particular sentence or prosodic contour) and phonetic features (the same response to a particular sentence regardless of speaker identity or pitch contour). Thus, it is not the case that there are some features that are abstract and others are not; it seems that all of the relevant features are abstract.
From Tang, Hamilton & Chang (2017). Column 1 shows the responses for a prosody-encoding electrode. The electrode distinguishes among different prosodic contours, but not different sentences (i.e., different phonetic representations) or speakers.
Why do I care about this so much? Because linguists (among other cognitive scientists) have been talking for decades about abstract representations, and I think there has often been skepticism about how the brain could encode abstractness. But the new work in ECoG by Chang and others illustrates that much of the organization of the speech cortex centers on abstraction – in other words, it seems that abstraction is the thing the brain cares most about, and it computes abstract representations rapidly and robustly in sensory cortex.
Two last points. First, Edward also showed that the properties identified in the left STG are also found in the right STG, consistent with the claim that speech perception is bilateral rather than unilateral (Hickok & Poeppel, 2000). Thus, it does not seem that speech perception is the key to language laterality in humans (but maybe syntax is – see section 3). Second, the two of us also had a nice chat about what his results mean for the innateness and development of these functional properties of the STG. He was of the opinion that the STG innately encodes these mechanisms, and that different languages make different use of this pre-existing phonetic toolbox. This brings me to the next topic, which centers on the issue of what is innate about language.
2. Deep learning and poverty of the stimulus
Yoshua Bengio gave one of the keynote lectures at this year’s SNL. For the uninitiated (such as myself), Yoshua Bengio is one of the leading figures in the field of deep learning. He stayed the course during the dark ages of connectionist neural network modeling, thinking that there would eventually be a breakthrough (he was right). Deep learning is the next phase of connectionist neural network modeling, centered on the use of massive amounts of training data and hidden network layers. Such computer models can correctly generate descriptions of pictures and translate between languages; in sum, they do things for which people are willing to pay money. Given this background, I expected to hear him say something like this in his keynote address: deep learning is awesome, we can do all the things that we hoped to be able to do in the past, Chomsky is wrong about humans requiring innate knowledge of language.
Instead, Bengio made a poverty of the stimulus argument (POS) in favor of Universal Grammar (UG). Not in those words. But the logic was identical.
For those unfamiliar with POS, the logic is that human knowledge, for instance of language, is underdetermined by the input. Question: You never hear ungrammatical sentences (such as *who did you see Mary and _), so how do you know that they are ungrammatical? Answer: Your mind innately contains the relevant knowledge to make these discriminations (such as a principle like Subjacency), making learning them unnecessary. POS arguments are central to generative grammar, as they provide much of the motivation for a theory of UG, UG being whatever is encoded in your genome that enables you to acquire a language and is lacking in things that do not learn language (such as kittens and rocks). I will not belabor the point here, and there are many accessible articles on the Faculty of Language blog that discuss these issues in great detail.
What is interesting to me is that Bengio made a strong POS argument, perhaps without realizing that he was following Chomsky’s logic almost to the letter. Bengio’s main point was that while deep learning has had a lot of successes, such computer models make strange mistakes that children would never make. For instance, the model would name a picture of an animal correctly on one trial, but with an extremely subtle change to the stimulus on the next trial (a change imperceptible to humans), the model might give a wildly wrong answer. This is directly analogous to Chomsky’s point that children never make certain errors, such as formulating grammatical rules that use linear rather than structural representations (see Berwick et al., 2011 for discussion). Bengio extended this argument, adding that children have access to dramatically less data than deep learning computer models do, which shows that the issue is not the amount or quality of data (very similar to arguments made repeatedly by Chomsky, for instance in this interview from 1977). For these reasons, Bengio suggested the following solution: build in some innate knowledge that guides the model to the correct generalizations. In other words, he made a strong POS argument for the existence of UG. I nearly fell out of my seat.
People often misinterpret what UG means. The claim really boils down to the fact that humans have some innate capacity for language that other things do not have. It seems that everyone, even leading figures in connectionist deep learning, can agree on this point. It only gets interesting when figuring out the details, which often include specific POS arguments. And in order to determine the details about what kinds of innate knowledge should be encoded in genomes and brains, and how, it would certainly be helpful to invite some linguists to the party (see part 5).
3. What is the phenotype of language? The importance of modality-independence to discussions of biology and evolution.
The central question that Erich Jarvis addressed during his Presidential Address on the opening night of this year’s SfN was whether human language is an elaborate form of the vocal learning seen in other animals or rather a horse of a different color altogether. Jarvis is an expert on the biology of birdsong, and he argued that human language is continuous with vocal learning in non-human organisms both genetically and neurobiologically. He presented a wide array of evidence to support his claim, mostly along the lines of showing how the genes and parts of the brain that do vocal learning in other animals have closely related correlates in humans. However, there are three main challenges to a continuity hypothesis that were either entirely omitted or extravagantly minimized: syntax, semantics, and sign language. It is remiss to discuss the biology and evolution of a trait without clearly specifying the key phenotypic properties of that trait, which for human language include the ability to generate an unbounded array of hierarchical expressions that have both a meaning and a sensory-motor expression, which can be auditory-vocal or visual-manual (and perhaps even tactile; Carol Chomsky, 1986). If somebody had only the modest aim of discussing the evolution of vocal learning, I would understand omitting these topics. But Jarvis clearly had the aim of discussing language more broadly, and his second slide included a figure from Hauser, Chomsky & Fitch (2002), which served as the bull’s-eye for his arguments. Consider the following a short response to his talk, elaborating on why it is important to discuss the important phenotypic traits of syntax, semantics, and modality-independence.
It is a cliché that sentences are not simply sequences of words, but rather hierarchical structures. Hierarchical structure was a central component of Hauser, Chomsky & Fitch’s (2002) proposal that syntax may be the only component of human language that is specific to it, as part of the general Minimalist approach of trying to reduce UG to a conceptual minimum (note that Bengio, Jarvis and Chomsky all agree on this point – none of them wants a rich, linguistically-specific UG, and all of them argue against it). Jarvis is not an expert on birdsong syntax, so it is perhaps unfair to expect him to discuss syntax in detail. However, Jarvis merely mentioned that some have claimed to identify recursion in birdsong (Gentner et al., 2006), feeling that to be sufficient to dispatch syntax. He did not mention the work debating this issue (Berwick et al., 2012), which illustrates that birdsong has syntax that is roughly equivalent to phonology, but not human sentence-level syntax. This work suggests that birdsong may be quite relevant to human language as a precursor system to human phonology (fascinating if true), but it does not appear capable of accounting for sentence-level syntax. In addition, the main interesting thing about syntax is that it combines words to produce new meanings, which birdsong does not do.
With respect to semantics, Jarvis showed that dogs can learn to respond to our commands, such as sitting when we say “sit”. He suggested that because dogs can “comprehend” human speech, they have a precursor to human semantics. But natural language semantics is much more than this. We combine words that denote concepts into sentences that denote events (Parsons, 1990). We do not have very good models of animal semantics, but a stimulus-response pairing is probably a poor one. It may very well be true that non-human primates have a semantic system similar to ours – desirable from a Minimalist point of view – but this needs to be explored beyond pointing out that animals learn responses to stimuli. Many organisms learn stimulus-response pairings, probably including insects – do we want to claim that they have a semantic system similar to ours?
The most important issue for me was sign language. I do not think Jarvis mentioned sign language once during the entire talk (I believe he briefly mentioned gestures in non-human animals). As somebody who works on the neurobiology of American Sign Language (ASL), this was extraordinarily frustrating (I cannot imagine the reaction of my Deaf colleagues). I believe that one of the most significant observations about human language is that it is modality-independent. As linguists have repeatedly shown, all of the relevant properties of linguistic organization found in spoken languages are found in sign languages: phonology, morphology, syntax, semantics (Sandler & Lillo-Martin, 2006). Deaf children raised by deaf parents learn sign language in the same way that hearing children learn spoken language, without instruction, including a babbling stage (Petitto & Marentette, 1991). Sign languages show syntactic priming just like spoken languages (Hall et al., 2015). Aphasia is similarly left-lateralized in sign and spoken languages (Hickok et al., 1996), and neuroimaging studies show that sign and spoken language activate the same brain areas when sensory-motor differences are factored out (Leonard et al., 2012; Matchin et al., 2017a). For instance, in the Mayberry and Halgren labs at UCSD we showed using fMRI that left-hemisphere language areas in the superior temporal sulcus (aSTS and pSTS) show a correlation between constituent structure size and brain activation in deaf native signers of ASL (6W: six-word lists; 2S: sequences of three two-word phrases; 6S: six-word sentences) (Matchin et al., 2017a). When I overlap these effects with similar structural contrasts in English (Matchin et al., 2017b) or French (Pallier et al., 2011), there is almost perfect overlap in the STS. Thus, both signed and spoken languages involve a left-lateralized combinatorial response to structured sentences in the STS. This is consistent with reports of a human-unique hemispheric asymmetry in the morphology of the STS (Leroy et al., 2015).
TOP: Matchin et al., in prep (ASL). BOTTOM: Pallier et al., 2011 (French).
Leonard et al. (2012), also from the Mayberry and Halgren labs, showed that semantically modulated MEG activity for auditory speech and for sign language activates the pSTS and is nearly identical in space and time.
All of these observations tell us that there is nothing important about language that must be expressed in the auditory-vocal modality. In fact, it is conceptually possible to imagine an alternate universe in which humans predominantly communicate through sign languages, and blind communities sometimes develop strange “spoken languages” in order to communicate with each other. Modality-independence has enormous ramifications for our understanding of the evolution of language, as Chomsky has repeatedly noted (Berwick & Chomsky, 2015; this talk, starting at 3:00). In order to make the argument that human language is continuous with vocal learning in other animals, sign language must be satisfactorily accounted for, and it’s not clear to me how it can be. This has social ramifications too. Deaf people still struggle for appropriate educational and healthcare resources, which I think stems in large part from ignorance in the scientific and medical community about the fact that sign languages are fully equivalent to spoken languages.
When I tweeted at Jarvis pointing out the issues I saw with his talk, he responded skeptically:
At my invitation, he stopped by our poster, and we discussed our neuroimaging research on ASL. He appears to be shifting his opinion:
This reaffirms to me how important sign language is to our understanding of language in general, and how useful friendly debate is for making progress on scientific problems. I greatly appreciate that Erich took the time to politely respond to my questions, come to our poster, and discuss the issues.
If you are interested in learning more about some of the issues facing the Deaf community in the United States, please visit Marla Hatrak’s blog: http://mhatrak.blogspot.com/, or Gallaudet University’s Deaf Education resources: http://www3.gallaudet.edu/clerc-center/info-to-go/deaf-education.html.
4. Speculative science
Paško Rakić is a famous neuroscientist, and his keynote lecture at SfN gave a history of his work throughout the last several decades. I will only give one observation about the content of his work: he thinks that it is necessary to posit innate mechanisms when trying to understand the development of the nervous system. One of his major findings is that cortical maps are not emergent, but rather are derived from precursor “protomaps” that encode the topographical organization that ends up on the cortical surface (Rakić, 1988). Again, it seems as though some of the most serious and groundbreaking neuroscientists, both old and new, are thoroughly comfortable discussing innate and abstract properties of the nervous system, which means that Generative Grammar is in good company.
Rakić also made an interesting commentary on the current sociological state of affairs in the sciences. He discussed a previous researcher (I believe from the late 1800s) who performed purely qualitative work speculating about how certain properties of the nervous system developed. He said that this research, serving as a foundation for his own work, probably would be rejected today because it would be seen as too “speculative”. He mentioned how the term speculative used to be perceived as a compliment, as it meant that the researcher went usefully beyond the data, thinking about how the world is organized and developing a theory that would make predictions for future research (he had a personal example of this, in that he predicted the existence of a particular molecule that he didn’t discover for 35 years).
This comment resonated with me. I am always puzzled about the lack of interest in theory and the extreme interest in data collection and analysis: if science isn’t about theory, also known as understanding the world, then what is it about? I get the feeling that people are afraid to postulate theories because they are afraid to be wrong. But every scientific theory that has ever been proposed is wrong, or will eventually be shown to be wrong, at least with respect to certain details. The point of a theory is not to be right, it’s to be right enough. Then it can provide some insight into how the world works which serves as a guide to future empirical work. Theory is a problem when it becomes misguiding dogma; we shouldn’t be afraid of proposing, criticizing, and modifying or replacing theories.
The best way to do this is to have debates that are civil but vigorous. My interaction with Erich Jarvis regarding sign language is a good example of this. One of the things I greatly missed about this year’s SNL was the debate. I enjoy these debates, because they provide the best opportunity to critically assess a theory: you find a person with a different perspective whom you can count on to find all of the evidence against it, saving you the initial work of finding this evidence yourself. This is largely why we have peer review, even with its serious flaws – the reviewer acts in part as a debater, bringing up evidence or other considerations that the author hasn’t thought of, hopefully leading to a better paper. I hope that next year’s SNL has a good debate about an interesting topic. I also feel that the conference could do well to encourage junior researchers to debate, as there is nothing better for personal improvement in science than interacting with an opposing view to sharpen one’s knowledge and logical arguments. It might be helpful to establish ground rules for these debates, in order to ensure that they do not cross the line from debate to contentious argument.
5. Society for the Neurobiology of …
I have pretty much given up hoping that the “Language” part of the Society for the Neurobiology of Language conference will live up to its moniker. This is not to say that SNL does not have a lot of fine quality research on the neurobiology of language – in fact, it has this in spades. What I mean is that there is little focus in the conference on integrating our work with the people who spend their lives trying to figure out what language is: linguists and psycholinguists. I find great value in these fields, as language theory provides a very useful guide for my own research. I don’t always take language theory to the letter, but rather as inspiration for the kinds of things one might find in the brain.
This year, there were some individual exceptions to this general rule of linguistic omission at the conference. I was pleased to see some posters and talks that incorporated language theory, particularly John Hale’s talk on syntax, computational modeling, and neuroimaging. He showed that the anterior and posterior temporal lobes are good candidates for basic structural processes, but not the IFG – no surprise, but good to see converging evidence (see Brennan et al., 2016 for details). But my interest in Hale’s talk only highlighted the trend towards omission of language theory at SNL, a trend well illustrated by looking at the keynote lectures and invited speakers at the conference over the years.
There are essentially three kinds of talks: (i) talks about the neurobiology of language, (ii) talks about (neuro)biology, and (iii) talks about non-language communication, cognition, or information processing. What’s missing? Language theory. Given that the whole point of our conference is the nature of human language, one would think that this is an important topic to cover. Yet I don’t think there has ever been a keynote talk at SNL about psycholinguistics or linguistics. I love dolphins and birds and monkeys, but doesn’t it seem a bit strange that we hear more about basic properties of non-human animal communication than human language? Here’s the full list of keynote speakers at SNL for every conference in the past 9 years – not a single talk that is clearly about language theory (with the possible exception of Tomasello, although his talk was about very general properties of language with a lot of non-human primate data).
2009
Michael Petrides: Recent insights into the anatomical pathways for language
Charles Schroeder: Neuronal oscillations as instruments of brain operation and perception
Kate Watkins: What can brain imaging tell us about developmental disorders of speech and language?
Simon Fisher: Building bridges between genes, brains and language
2010
Karl Deisseroth: Optogenetics: Development and application
Daniel Margoliash: Evaluating the strengths and limitations of birdsong as a model for speech and language
2011
Troy Hackett: Primate auditory cortex: principles of organization and future directions
Katrin Amunts: Broca’s region -- architecture and novel organizational principles
2012
Barbara Finlay: Beyond columns and areas: developmental gradients and reorganization of the neocortex and their likely consequences for functional organization
Nikos Logothetis: In vivo connectivity: paramagnetic tracers, electrical stimulation & neural-event triggered fMRI
2013
Janet Werker: Initial biases and experiential influences on infant speech perception development
Terry Sejnowski: The dynamic brain
Robert Knight: Language viewed from direct cortical recordings
2014
Willem Levelt: Localism versus holism. The historical origins of studying language in the brain
Constance Scharff: Singing in the (b)rain
Pascal Fries: Brain rhythms for bottom-up and top-down signaling
Michael Tomasello: Communication without conventions
2015
Susan Goldin-Meadow: Gestures as a mechanism of change
Peter Strick: A tale of two primary motor areas: “old” and “new” M1
Marsel Mesulam: Revisiting Wernicke’s area
Marcus Raichle: The restless brain: how intrinsic activity organizes brain function
2016
Mairéad MacSweeney: Insights into the neurobiology of language processing from deafness and sign language
David Attwell: The energetic design of the brain
Anne-Lise Giraud: Modelling neuronal oscillations to understand language neurodevelopmental disorders
2017
Argye Hillis: Road blocks in brain maps: learning about language from lesions
Yoshua Bengio: Bridging the gap between brains, cognition and deep learning
Ghislaine Dehaene-Lambertz: The human infant brain: A neural architecture able to learn language
Edward Chang: Dissecting the functional representations of human speech cortex
I was at most of these talks; most of them were great, and at the very least entertaining. But it seems to me that the great advantage of keynote lectures is to learn about something outside of one’s field that is relevant to it, and both neurobiology AND language fit this description. This is particularly striking given the importance of theory to much of the scientific work I described in this post. And I can think of many linguists and psycholinguists who would give interesting and relevant talks, and who are also interested in neurobiology and want to chat with us. At the very least, they would be entertaining. Here are just some that I am thinking of off the top of my head: Norbert Hornstein, Fernanda Ferreira, Colin Phillips, Vic Ferreira, Andrea Moro, Ray Jackendoff, and Lyn Frazier. And if you disagree with their views on language, well, I’m sure they’d be happy to have a respectful debate with you.
All told, this was a great conference season, and I’m looking forward to what the future holds for the neurobiology of language. Please let me know your thoughts on these conferences, and what I missed. I look forward to seeing you at SNL 2018, in Quebec City!
-William
References
Berwick, R. C., & Chomsky, N. (2015). Why only us: Language and evolution. MIT press.
Berwick, R. C., Pietroski, P., Yankama, B., & Chomsky, N. (2011). Poverty of the stimulus revisited. Cognitive Science, 35(7), 1207-1242.
Berwick, R. C., Beckers, G. J., Okanoya, K., & Bolhuis, J. J. (2012). A bird’s eye view of human language evolution. Frontiers in Evolutionary Neuroscience, 4.
Brennan, J. R., Stabler, E. P., Van Wagenen, S. E., Luh, W. M., & Hale, J. T. (2016). Abstract linguistic structure correlates with temporal activity during naturalistic comprehension. Brain and Language, 157, 81-94.
Chomsky, C. (1986). Analytic study of the Tadoma method: Language abilities of three deaf-blind subjects. Journal of Speech, Language, and Hearing Research, 29(3), 332-347.
Gentner, T. Q., Fenn, K. M., Margoliash, D., & Nusbaum, H. C. (2006). Recursive syntactic pattern learning by songbirds. Nature, 440(7088), 1204-1207.
Hall, M. L., Ferreira, V. S., & Mayberry, R. I. (2015). Syntactic priming in American Sign Language. PLoS ONE, 10(3), e0119611.
Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598), 1569-1579.
Hickok, G., Bellugi, U., & Klima, E. S. (1996). The neurobiology of sign language and its implications for the neural basis of language. Nature, 381(6584), 699-702.
Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138.
Leonard, M. K., Ramirez, N. F., Torres, C., Travis, K. E., Hatrak, M., Mayberry, R. I., & Halgren, E. (2012). Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex. Journal of Neuroscience, 32(28), 9700-9705.
Leroy, F., Cai, Q., Bogart, S. L., Dubois, J., Coulon, O., Monzalvo, K., ... & Lin, C. P. (2015). New human-specific brain landmark: the depth asymmetry of superior temporal sulcus. Proceedings of the National Academy of Sciences, 112(4), 1208-1213.
Matchin, W., Villwock, A., Roth, A., Ilkbasaran, D., Hatrak, M., Davenport, T., Halgren, E., & Mayberry, R. I. (2017). The cortical organization of syntactic processing in American Sign Language: Evidence from a parametric manipulation of constituent structure in fMRI and MEG. Poster presented at the 9th annual meeting of the Society for the Neurobiology of Language.
Matchin, W., Hammerly, C., & Lau, E. (2017). The role of the IFG and pSTS in syntactic prediction: Evidence from a parametric study of hierarchical structure in fMRI. Cortex, 88, 106-123.
Monahan, P. J., & Idsardi, W. J. (2010). Auditory sensitivity to formant ratios: Toward an account of vowel normalisation. Language and Cognitive Processes, 25(6), 808-839.
Pallier, C., Devauchelle, A. D., & Dehaene, S. (2011). Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences, 108(6), 2522-2527.
Parsons, T. (1990). Events in the Semantics of English (Vol. 5). Cambridge, MA: MIT Press.
Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251(5000), 1493.
Rakic, P. (1988). Specification of cerebral cortical areas. Science, 241(4862), 170.
Sandler, W., & Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge University Press.
Tang, C., Hamilton, L. S., & Chang, E. F. (2017). Intonational speech prosody encoding in the human auditory cortex. Science, 357(6353), 797-801.
Thursday, November 16, 2017
Is science broken/breaking?
Is science broken/breaking, and if it is, what broke/is breaking it? This question has been asked and answered a lot lately. Here is another recent contribution by Roy and Edwards (R&E). Their answer is that it is nearly broken (or at least severely injured) and that we should fix it by removing the perverse incentives that currently drive it. Though I am sympathetic to ameliorating many of the adverse forces R&E identify, I am more skeptical than they are that there is much of a crisis out there. Indeed, to date, I have seen no evidence showing that what we see today is appreciably worse than what we had before, either in the distant past (you know, in the days when science was the more or less exclusive pursuit of whitish men of means) or even the more recent past (you know, when a PhD was still something that women could only dream about). I have seen no evidence that shows that published results were once more reliable than they are now or that progress overall was swifter. Furthermore, from where I sit, things seem (at least from the outside) to be going no worse than before in the older “hard” sciences, and in the “softer” sciences the problem is less the perverse incentives and slipshod data management that R&E point to than the dearth of good ideas that would allow such inquiries to attain some explanatory depth. So, though I agree that there are many perverse incentives out there and that there are pressures that can (and often do) lead to bad behavior, I am unsure whether, given the scale of the modern scientific enterprise, things are really appreciably worse today than they were in some prior golden age (not that I would object to more money being thrown at the research problems I find interesting!). Let me ramble a bit on these themes.
What broke/is breaking science? R&E point to hypercompetition among academic researchers. Whence the hypercompetition? Largely from the fact that universities are operating “more like businesses” (2). What in particular? (i) The squeezed labor market for academics (fewer tenure track jobs and a less pleasant work environment), (ii) the reliance on quantitative performance metrics (numbers of papers, research dollars, citations) and (iii) the fall in science research funding (from 2% of GDP in 1960 to 0.78% in 2014)[1] (p. 7) work together to incentivize scientists to cut corners in various ways. As R&E put it:
The steady growth of perverse incentives, and their instrumental role in faculty research, hiring and promotion practices, amounts to a systematic dysfunction endangering scientific integrity. There is growing evidence that today’s research publications too frequently suffer from lack of replicability, rely on biased data-sets, apply low or sub-standard statistical methods, fail to guard against researcher biases, and overhype their findings. (p. 8)
So, perverse incentives due to heightened competition for shrinking research dollars and academic positions lead scientists interested in advancing their research and careers to conduct research “more vulnerable to falsehoods.” More vulnerable than what/when? Well, by implication, than some earlier golden age when such incentives did not dominate and scientists pursued knowledge in a more relaxed fashion and were not incentivized to cut corners as they are today.[2]
I like some of this story. I believe, as R&E argue, that scientific life is less pleasant than it used to be when I was younger (at least for those that make it into an academic position). I also agree that the pressures of professional advancement make it costly (especially for young investigators) to undertake hard questions, ones that might resist solution (R&E quote Nobelist Roger Kornberg as claiming: “If the work you propose to do isn’t virtually certain of success, then it won’t be funded” (8)).[3] Not surprisingly then, there is a tendency to concentrate on those questions to which already available techniques apply and that hard work and concentrated effort can crack. I further agree that counting publications is, at best, a crude way of evaluating scientific merit, even if buttressed by citation counts (but see below). All of this seems right to me, and yet… I am skeptical that science as a whole is really doing so badly or that there was a golden age that our own is a degenerate version of. In fact, I suspect that people who think this don’t know much about earlier periods (there really was always lots of junk published and disseminated) or have an inflated view of the cooperative predilections of our predecessors (though how anyone who has read The Double Helix might think this is beyond me).
But I have a somewhat larger problem with this story: if the perverse incentives are as R&E describe them, then we should witness their ill effects all across the sciences and not concentrated in a subset of them. In particular, they should affect the hardcore areas (e.g. physics, chemistry, molecular biology) just as they do the softer (more descriptive) domains of inquiry (social psychology, neuroscience). But it is my impression that this is not what we find. Rather, we find the problem areas more concentrated, roughly in those domains of inquiry where, to be blunt, we do not know that much about the fundamental mechanisms at play. Put another way, the problem is not merely (or even largely) the bad incentives. The problem is that we often cannot distinguish those domains that are sciency from those that are scientific. What’s the difference? The latter have results (i.e. serious theory) that describe non-trivial aspects of the basic mechanisms, whereas the former have methods (i.e. ways of “correctly” gathering and evaluating data) that are largely deployed to descriptive (vs explanatory) ends. As Suppes said over 60 years ago: “It’s a paradox of scientific method that the branches of empirical science that have the least theoretical developments have the most sophisticated methods of evaluating evidence.” It should not surprise us that the domains where insight is weakest are also the domains where shortcuts are most accessible.
And this is why it’s important to distinguish these domains. If we look, it seems that the perverse incentives R&E identify are most apt to do damage in those domains where we know relatively little. Fake data, non-replicability, bad statistical methods leading to forking paths/p-hacking, research biases: these are all serious problems, especially in domains where nothing much is known. In domains with few insights, where all we have is the data, screwing with the data (intentionally or not) is the main source of abuse. And in those domains, when incentives for abuse rise, the enticement to make the data say what we need them to say heightens. And when the techniques for managing the data can be manipulated to make them say what you want them to say (or when their proper deployment eludes even the experts in the field (see here and here)), then opportunity allows enticement to flower into abuse.
The problem then is not just perverse incentives and hypercompetition (these general factors hold in the mature sciences too) but the fact that in many fields the only bulwark against scientific malpractice is personal integrity. What we are discovering is that, as a group, scientists are just as prone to pursuing self-interest and career advancement as any other group. What makes scientists virtuous is not their characters but their non-trivial knowledge. Good theory serves as a conceptual brake against shoddy methods. Strong priors (which is what theory provides) are really important in preventing shoddy data and slipshod thinking from leading the field astray. If this is right, then the problem is not with the general sociological observation that the world is in many ways crappier than before, but with the fact that many parts of what we call the sciences are pretty immature. There is far less science out there than advertised if we measure a science by the depth of its insights rather than the complexity of its techniques (especially its data management techniques).
There is, of course, a reason why the term ‘science’ is used to cover so much inquiry. The prestige factor. Being “scientific” endows prestige, money, power and deference. Science stands behind “expertise,” and expertise commands many other goodies. There is thus utility in inflating the domain of “science,” and this widens the possibilities for and advantages of the kinds of problems that R&E catalogue.
R&E end with a discussion of ways to fix things. They seem like worthy, albeit modest, fixes. But I personally doubt they will make much of a difference. They include getting a better fix on the perverting incentives, finding better ways to measure scientific contribution so that reward can be tuned to these more accurate metrics, and implementing more vigilant policing and punishment of malefactors. This might have an effect, though I suspect it will be quite modest. Most of the suggestions revolve around ways of short-circuiting data manipulation. That’s why I think these suggestions will ultimately fail to do much. They misdiagnose the problem. R&E implicitly take the problem to reside mainly with current perverse incentives to pollute the data stream for career advancement. The R&E solution amounts to cleaning up the data stream by eliminating the incentives to dirty it. But the problem is not only (or mainly) dirty data. The problem is our very modest understanding, for which yet more data is not a good substitute.
Let me end with a mention of another paper. The other paper (here) takes a historical view of metrics and how they have affected research practice in the past. It notes that the problems we identify as novel today have long been with us. It notes that this is not the first time people have looked for some kind of methodological or technological fix for slovenly practice. And that is the problem: the idea that there is a kind of methodological “fix” available, a dream of a more rigorous scientific method and a clearer scientific ethics. But there is no such method (beyond the trivial “do your best”), and the idea that scientists qua scientists are more noble than others is farfetched. Science cannot be automated. Thinking is hard, and ideas, not just data collection, matter. Moreover, this kind of thinking cannot be routinized, and insights cannot be summoned, no matter how useful they would be. What the crisis R&E identify points to, IMO, is that where we don’t know much we can be easily misled and easily confused. I doubt that there is a methodological or institutional fix for this.
[1] It is worth pointing out that real GDP in 2016 is over five times higher than it was in 1960 (roughly 3 trillion vs 17 trillion) (see here). In real terms, then, there is a lot more money today than there was then for science research. Though the percentage went down, the absolute amount really shot up.
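To spell out the back-of-the-envelope arithmetic using the rough figures above (illustrative numbers only): 0.02 × $3 trillion ≈ $60 billion of research funding in 1960, versus 0.0078 × $17 trillion ≈ $133 billion in 2014. The share of GDP fell by more than half, but the absolute real amount roughly doubled.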
[2] Again, what is odd is the dearth of comparative data on these measures. Are findings less replicable today than in the past? Are data sets more biased than before? Was statistical practice better in the past? I confess that it is hard to believe that any of these measures have gotten worse if compared using the same yardsticks.
[3] This is odd, and I’ve complained about it myself. However, it is also true that in the good old days science was a far more restricted option for most people (it is far more open today and many more people, and kinds of people, can become scientists). What seems more or less right is that immediately after the war there was a lot of government money pouring into science, and that made it possible to pursue research that did not show immediate signs of success. What would be nice to see is evidence that this made for better, more interesting science, rather than more comfortable scientists.
Tuesday, November 7, 2017
Minimal pairs
As any well-educated GGer knows, there is a big and important difference between grammaticality and acceptability (see here and here) (don’t be confused by the incessant attempts by many (especially psycho types) to confuse these very separate notions (some still call judgment tasks ‘grammaticality judgments’ (sheesh!!))). The latter pertains to native speaker intuitions, the former to GGers’ theoretical proposals. It is a surprising and very useful fact that native speakers have relatively stable, converging judgments about the acceptability (under an interpretation) of linguistic forms over a pretty wide domain of linguistic stimuli. This need not have been the case, but it is. Moreover, this capacity to discriminate among different linguistic examples and to comparatively rate them consistently over a large domain has proven to be a very good probe into the (invisible underlying) G structure that GGers have postulated is involved in linguistic competence. So for lots of GG research (the bulk of it, I would estimate) the road to grammaticality has been paved by acceptability. As I’ve mentioned before (and will do again here), we should be quite surprised that a crude question like “how does this sound (with this meaning)?” has been able to yield so much. IMO, it strongly suggests that FL is a (relatively) modular system (and hence immune to standard kinds of interference effects) and that FL is a central cognitive component of human mental life (which is why its outputs have robust behavioral effects). At any rate, acceptability’s nice properties make life relatively easy for GGers like me, as they allow me/us to wallow in experimental crudity without paying too high an empirical price.[1]
That is the good news. Now for some bad. The fact that acceptability judgments are fast and easy does not mean that they can be treated cavalierly. Not all acceptability judgments are equally useful. The good ones control for the non-grammatical factors that we all know affect acceptability. The good ones generally exploit minimal pairs to control for these distorting non-grammatical factors. Sadly, one problem with lots of work in syntax is its lack of fastidiousness concerning minimal pairs. Let’s consider for a moment why this is a problem.
If acceptability is our main empirical probe into grammaticality, and it is understood that acceptability is multivariate, with grammaticality being but one factor among many contributing to acceptability, then isolating what the grammar contributes to an acceptability judgment requires controlling for all acceptability effects that are not grammatically induced. So, the key to the acceptability judgment methodology is to bend over backwards to segregate those factors that we all know can affect acceptability but cannot be traced to grammaticality. And it is the practicing GGer that needs to worry about the controls, because speakers cannot be trusted to do so, as they have no special conscious insight into their grammatical knowledge (they cannot tell us reliably why something sounds unacceptable and whether that is because their G treats it as ungrammatical).[2] And that is where minimal pairs come in. They efficiently function to control for non-grammatical factors like length, lexical frequency, pragmatic appropriateness, semantic coherence, etc. Or, to put this another way: to the degree that I can use largely the same lexical items, in largely the same order, to that degree I can control for features other than structural difference and thereby focus on G distinctions as the source for whatever acceptability differences I observe. This is what good minimal pairs do, and so this is what makes minimal pairs the required currency of grammatical commerce. Thus, when they are absent suspicion is warranted, and best practice would encourage their constant use. In what follows I would like to illustrate what I have in mind by considering a relatively hot issue nowadays: the grammatical status of Island Effects (IE) and how minimal pairs, correctly deployed, render a lot of the argument against the grammatical nature of island effects largely irrelevant. I will return to this theme at the end.
To get started, let’s consider an early example from Chomsky (1964: Current Issues). He observes that (1) is three ways ambiguous. It has the three paraphrases in (2).

1. John watched a woman walking to Grand Central Station (GCS)

2. a. John watched a woman while he was walking to GCS
b. John watched a woman that was walking to GCS
c. John watched a woman walk to GCS
The ambiguities reflect structural differences that the same sequence of words can have. In (2a), walking to GCS is a gerundive adjunct and John is the controller of the subject PRO.[3] In (2b) a woman walking to GCS is a reduced relative clause, with walking to GCS an adjunct modifying the head woman. In contrast to the first reading, a woman walking to GCS forms a nominal constituent. In the third reading a woman walking to GCS is a gerundive clausal complement of watch depicting an event. It is thematically similar to, but aspectually different from, the naked infinitive small clause provided in (2c). Thus, the three-way ambiguity witnessed in (1) is the product of three different syntactic configurations that this string of words can realize, and that is made evident in the paraphrases in (2).
Chomsky further notes that if we WH-move the object of to (optionally pied-piping the preposition), all but the third reading disappear:

3. a. Which train station did John watch a woman walking to
b. To which train station did John watch a woman walking

Given what we know about islands and movement, this should not be surprising. Temporal adjuncts resist WH extraction (CED effects), as do relative clauses (CNPC). Clausal complements do not. Thus, we predict that movement of (to) which train station from (1) with structures analogous to (2a,b) should be illicit, while movement from (1) with a complement structure like (2c) should be fine. Thus, we expect the movement to factor out all but one of the readings we find with (1). And this is what occurs.
Note that this explanation of the loss of all but one reading coincides with the fact that all but the third paraphrase in (2) resist WH extraction:

4. a. *(To) which train station did John watch a woman while he was walking (to)
b. *(To) which train station did John watch a woman who was walking (to)
c. (To) which train station did John watch a woman walk (to)

Thus the reason that (3) becomes monoguous under WH movement is the same reason that (4a,b) are far more unacceptable than (4c). This argues that the unacceptability of these sentences ((un)acceptability under an interpretation for (1) and tout court for (4)) implicates a syntactic source, precisely because other plausible factors are controlled for, and they are controlled for because we have used the same words, in the same order, thereby varying only the grammatical structures that they realize.[4]
We can go a little further, IMO. Note that the dependent measure in (4) is relative acceptability, with (4c) as baseline. But note that in this case the items compared are not identical. The fact that we get the same effects in (1)/(3) as we do in (2)/(4) argues that the data in (4) reflect structural differences and not the extraneous vocabulary items that differ among the examples. Furthermore, the absence of the two illicit readings in (3) is quite clear. It is often asserted that acceptability judgments are murky and can be trivially enhanced/degraded by changing the WHs moved or the intervening lexical items. Perhaps. Here we have a case where the facts strike me as particularly clear. Only the event reading survives the extraction. The other ones disappear, which is exactly what a standard theory of islands would predict. This, I believe, is typical for well-constructed minimal pair cases: the dependent measure will often be the availability of a reading and, interestingly, the presence/absence of a reading is often more perspicuous for native speakers than is a more direct relative acceptability judgment.
I would like to consider one more case for illustration. This involves near minimal pairs rather than identical strings. What the above Chomsky case provides evidence for (rather clear evidence, IMO) is that G structure matters for extraction. It shows this by factoring out everything but such structure as the relevant variable. However, it does not factor out one important variable: meaning. Sentence (1) has three readings in virtue of having three different syntactic structures. So, the argument cannot single out whether the relevant factor is syntactic or semantic. Does the difference under WH movement reflect the effects of formal grammatical structure (syntax) or of meaning (semantics)? As the two vary together in these cases, it is impossible to pull them apart. What we need to focus on here are structures that are semantically the same but formally different. And this is very hard to do. It is, however, not quite impossible. Let me discuss a (near) minimal pair involving event complements.[5]
Consider the following two sets of sentences:

5. a. Mary heard the sneaky burglar clumsily attempt to open the door
b. Mary heard the sneaky burglar’s clumsy attempt to open the door
c. What1 did Mary hear the sneaky burglar clumsily attempt to open t1
d. What1 did Mary hear the sneaky burglar’s clumsy attempt to open t1

6. a. Mary heard someone clumsily attempt to open the door
b. Mary heard a clumsy attempt to open the door
c. What1 did Mary hear someone clumsily attempt to open t1
d. What1 did Mary hear a clumsy attempt to open t1
The main difference between (5) and (6) is that the latter tries to control for definiteness effects in nominals. What is relevant here is that both sets of cases distinguish the acceptability of the c from the d cases, with the former being judged better than the latter using standard Sprouse-like techniques (i.e. we find a super-additivity effect for the (5d)/(6d) cases). Why is this interesting?
Well, note that the near minimal pairs have a common semantics. Perception verbs take eventive internal arguments. These can come in either a clausal ((5a,c)/(6a,c)) or a nominal ((5b,d)/(6b,d)) flavor. The latter should show island effects under movement, given standard subjacency reasoning. In sum, these examples control for semantic effects by identifying them across the two syntactic structures, yet we still find the super-additivity signature characteristic of islands. This argues for a syntactic (rather than a semantic) conception of islands, for syntax is the one factor we varied in these near minimal pairs, the meaning having been held constant across the a/b and c/d examples.
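For readers who have not seen the Sprouse-style factorial logic spelled out, here is a minimal sketch of how a super-additivity (differences-in-differences) score is computed for a 2x2 design like (5a-d). The ratings below are invented purely for illustration; they are not the actual experimental data.

```python
# Minimal sketch of the factorial (Sprouse-style) island design.
# The z-scored ratings below are invented for illustration only.

# Factors: STRUCTURE (clausal complement vs. nominal) x EXTRACTION (no vs. yes),
# corresponding roughly to (5a), (5b), (5c), (5d).
ratings = {
    ("clausal", "no_extraction"): 1.0,   # (5a)
    ("nominal", "no_extraction"): 0.8,   # (5b)
    ("clausal", "extraction"):    0.5,   # (5c)
    ("nominal", "extraction"):   -0.6,   # (5d)
}

# Cost of extraction in the non-island (clausal) structure:
extraction_cost = ratings[("clausal", "no_extraction")] - ratings[("clausal", "extraction")]

# Cost of the nominal structure without any extraction:
structure_cost = ratings[("clausal", "no_extraction")] - ratings[("nominal", "no_extraction")]

# If the two factors were merely additive, (5d) should cost their sum.
predicted_d = ratings[("clausal", "no_extraction")] - (extraction_cost + structure_cost)

# Super-additivity (differences-in-differences): how much worse (5d) is
# than the additive prediction. A reliably positive score is the island signature.
dd_score = predicted_d - ratings[("nominal", "extraction")]

print(f"additive prediction for (5d): {predicted_d:.2f}")
print(f"super-additivity (DD) score: {dd_score:.2f}")
```

The logic is the same for (6a-d): a reliably positive score means the island-violating d case is worse than the sum of the independent structure and extraction penalties would predict.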
Howard Lasnik is constantly reminding those around him how important minimal pairs are in constructing a decent grammatical argument. He notes this because it is not yet second nature for GGers to employ them. And he is right to insist that we do so, for the reasons outlined above. It allows us to make our arguments cleaner and to control for plausible interfering factors. Minimal pairs are the nod we give to the fact that acceptability judgments are little experiments, with all the confounds that experiments bring with them. Minimal pairs are the price we pay for using acceptability judgments to probe grammatical structure. As Chomsky noted long ago in Syntactic Structures, these sorts of judgments can really get you deep into a G structure very efficiently. They are an indispensable part of linguistic theorizing. However, to do their job well, we must understand their logic. We must understand that theories of grammar are not theories of acceptability and that there is a gap between acceptability (a term of art for describing data) and grammaticality (a term of art for describing the products of generative procedures). Happily, the gap can be bridged and acceptability can be fruitfully used. But jumping that gap means controlling for extraneous factors that impact acceptability. And that is why minimal pairs are critical. Deployed well, they allow us to control the hell out of the data and zero in on the grammatical factors of linguistic interest. So, let’s hear it for minimal pairs, and let’s all promise to use them in all of our papers and presentations from now on. Pledges to do so can be sent to me written on a five dollar bill c/o the ling dept at UMD.
[1] Jon Sprouse and friends have shown roughly this: that crude methods are fine as they converge with more careful ones.
[2] If undergrads are to be believed, virtually all unacceptability stems from semantic ill-formedness. If asked why some form sounds off, you can bet dollars to doughnuts that an undergrad will insist that it doesn’t mean anything, even while telling you what it in fact means.
[3] Which, you all know, does not exist but is actually a copy/occurrence of John due to sidewards internal merge. And yes, this is an unpaid political announcement.
[4] Note the use of ‘grammatical’ rather than ‘syntactic.’ These cases implicate structure, but as syntactic structure and semantic interpretation co-vary, we cannot isolate one or the other as the relevant causal element. We return to this with the second example of a minimal pair below.