I just read a fascinating paper and an excellent comment thereupon in Nature Neuroscience (thx to Pierre Pica for sending them along) (here and here). As I will argue below, the two papers illuminate two very different views of what structure is and where it comes from. The two views have names that should now be familiar to you: Rationalism (R) and Empiricism (E). What is interesting about the two papers discussed below is that they indicate that R and E are contrasting philosophical conceptions with important empirical consequences for very concrete research. In other words, R and E are philosophical in the best sense, leading to different conceptions with testable empirical (though not Empiricist) consequences. Or, to put this another way, R and E are, broadly speaking, research programs pointing to different conceptions of what structure is and how it arises.
Before getting into this more contentious larger theme, let’s review the basic findings. The main paper is written by the conglomerate of Saygin, Osher, Norton, Youssoufian, Beach, Feather, Gaab, Gabrieli and Kanwisher (henceforth Saygin et al). The comment is written by Dehaene and Dehaene-Lambertz (DDL). The principal finding is, as the title makes admirably clear, that “connectivity precedes function in the development of the visual word form area.” What’s this mean?
Saygin et al observes that the brain is divided up into different functional regions and that these are “found in approximately the same anatomical location in virtually every normal adult”. The question is how this organization arises: “how does a particular cortical location become earmarked”? (Saygin et al:1250). There are two possibilities: (i) the connectivity follows the function or (ii) the function follows the connectivity. Let’s expand a bit.
(i) is the idea that a region of the brain wires up with another region in virtue of what each does, and does at roughly the same time. This is roughly the Hebbian idea that regions that fire together wire together (FTWT). So, a region that is sensitive to certain kinds of visual features (e.g. the Visual Word Form Area (VWFA)) hooks up with an area where “language processing is often found” (DDL:1193) to deliver a system that undergirds reading (coding a dependency between “sounds” and “letters”/”words”).
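For readers who like their intuitions executable, here is a minimal sketch of the FTWT idea. Everything in it (population sizes, the learning rate, the toy stimulus) is illustrative, not from either paper; only the update rule itself is Hebb’s:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy populations of units with a weight matrix W between them.
# Hebb's rule strengthens the connection between any pair of units
# that are active at the same time (fire together, wire together).
n_visual, n_language = 8, 6
W = np.zeros((n_visual, n_language))
eta = 0.1  # learning rate (illustrative)

for _ in range(100):
    # A shared external event drives both populations at once;
    # otherwise each shows only sparse background activity.
    event = rng.random() > 0.5
    visual = (rng.random(n_visual) < (0.8 if event else 0.1)).astype(float)
    language = (rng.random(n_language) < (0.8 if event else 0.1)).astype(float)
    # Hebbian update: delta_w = eta * pre-activity * post-activity
    W += eta * np.outer(visual, language)

# Connections accumulate strength only where co-activation occurred,
# so structure (W) here is a record of functional episodes.
print(W.mean() > 0)
```

The point of the toy is just that, on the E picture, the weight matrix is nothing but a summary of correlated firing: structure bookkeeps function.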
(ii) reverses the causal flow. Rather than intrinsic functional properties of the different regions driving their connectivity (via concurrent firing), the extrinsic connectivity patterns of the regions drive their functional differentiation. To coin a phrase: areas that are wired together fire together (WTFT). This is what Saygin et al finds:
This tight relationship between function and connectivity across the cortex suggests a developmental hypothesis: patterns of extrinsic connectivity (or connectivity fingerprints) may arise early in development, instructing subsequent functional development.
The may is redeemed to a does by following young kids before and after they learn to read. As DDL summarizes it (1193):
To genuinely test the hypothesis that the VWFA owes its specialization to a pre-existing connectivity pattern, it was necessary to measure brain connectivity in children before they learned to read. This is what Saygin et al. now report. They acquired diffusion-weighted images in children around the age of 5 and used them to reconstruct the approximate trajectory of anatomical fiber tracts in their brain. For every voxel in the ventral visual cortex, they obtained a signature profile of its quantitative connectivity with 81 other brain regions. They then examined whether a machine-learning algorithm could be trained to predict, from this connectivity profile, whether or not a voxel would become selective to written words 3 years later, once the children had become literate. Finally, they tested their algorithm on a child whose data had not been used for training. And it worked: prior connectivity predicted subsequent function (my bold, NH). Although many children did not yet have a VWFA at the age of 5, the connections that were already in place could be used to anticipate where the VWFA would appear once they learned to read.
I’ve bolded the conclusion: WTFT and not FTWT. What makes the Saygin et al results particularly interesting is their precision. Saygin et al is able to predict the “precise location of the VWFA” in each kid based on “the connectivity of this region even before the functional specialization for orthography in the VWFA exists” (1254). So voxels that are not sensitive to words and letters before kids learn to read become so in virtue of prior (non-functionally based) connections to language regions.
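The logic of the analysis DDL describes can be sketched in a few lines. To be clear about what is and isn’t from the paper: the 81-region fingerprints and the leave-one-child-out design are theirs; the synthetic data and the simple nearest-centroid classifier below are stand-ins for their actual data and machine-learning method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each voxel gets an 81-dimensional "connectivity fingerprint"
# (synthetic here) plus a binary label: did it become word-selective
# three years later? We plant the signal in the first 5 dimensions
# so that prediction is possible at all.
n_children, voxels_per_child, n_regions = 10, 50, 81
X = rng.random((n_children * voxels_per_child, n_regions))
y = (X[:, :5].mean(axis=1) > 0.5).astype(int)
child = np.repeat(np.arange(n_children), voxels_per_child)

# Leave-one-child-out: train on 9 children, test on the held-out one,
# so the prediction is never fit to the child it is evaluated on.
accuracies = []
for held_out in range(n_children):
    train, test = child != held_out, child == held_out
    # Nearest-centroid rule: call a test voxel word-selective if its
    # fingerprint lies closer to the centroid of class-1 training voxels.
    c0 = X[train & (y == 0)].mean(axis=0)
    c1 = X[train & (y == 1)].mean(axis=0)
    d0 = np.linalg.norm(X[test] - c0, axis=1)
    d1 = np.linalg.norm(X[test] - c1, axis=1)
    pred = (d1 < d0).astype(int)
    accuracies.append((pred == y[test]).mean())

print(round(float(np.mean(accuracies)), 2))
```

If the held-out accuracy beats chance, connectivity measured before literacy carries information about later function, which is exactly the shape of the inference in the bolded sentence above.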
Some remarks before getting into the philosophical issues.
First, getting to this result requires lots of work, both neuroimaging work and good behavioral work. This paper is a nice model for how the two can be integrated to provide a really big and juicy result.
Second, this appears in a really fancy journal (Nature Neuroscience) and one can hope that it will help set a standard for good cog-neuro work, work that emphasizes both the cognition and the neuroscience. Saygin et al does a lot of good cog work to show that in non-readers the VWFA is not differentially sensitive to letters/words even though it comes to be so sensitive after kids have learned to read.
Third, DDL points out (1192-3) that whatever the VWFA is sensitive to, it is not simply visual features (i.e. a bias for certain kinds of letter-like shapes). Why not? Because (i) the region is sensitive to letters and not numerals despite letters and numerals being formed using the same basic shapes and (ii) the VWFA is located in the same place in blind subjects and non-blind ones so long as the blind ones can read braille or letters converted into “synthetic spatiotemporal sound patterns.” As DDL coolly puts it:
This finding seems to rule out any explanation based on visual features: the so-called ‘visual’ cortex must, in fact, possess abstract properties that make it appropriate to recognize the ‘shapes’ of letters, numbers or other objects regardless of input modality.
So, it appears that what VWFA takes as a “shape” is itself influenced by what the language area would deem shapely. It’s not just two perceptual domains with their independently specifiable features getting in sync, for what even counts as a shape depends on what an area is wired to. VWFA treats a “shape” as letter-like if it tags a “shape” that is languagy.
Ok, finally time for the sermon: at the broadest level E and R differ in their views of where structure comes from and its relation to function.
For Es, function is the causal driver and structure subserves it. Want to understand the properties of language, look at its communicative function. Want to understand animal genomes, look at the evolutionarily successful phenotypic expressions of these genomes. Want to understand brain architecture, look at how regions function in response to external stimuli and apply Hebbian FTWT algorithms. For Es, structure follows function. Indeed, structure just is a convenient summary of functionally useful episodes. Cognitive structures reflect shaping effects of the environmental inputs of value to the relevant mind. Laws of nature are just summaries of what natural objects “do.” Brain architectures are reflections of how sensory sensitive brain regions wire up when concurrently activated. Structure is a summary of what happens. In short, form follows (useful) function.
Rs beg to differ. They understand structure as a precondition of function. It doesn’t follow function but precedes it (put Descartes before the functional horse!). Function is causally constrained by form, which is causally prior. For Rs, the laws of nature reflect the underlying structure of an invisible real substrate. Mental organization causally enables various kinds of cognitive activity. Linguistic competence (the structure of FL/UG and the structure of individual Gs) allows for linguistic performances, including communication and language acquisition. Genomic structure channels selection. In other words, function follows form. The latter is causally prior. Structure “instructs” (Saygin et al’s term) subsequent functional development.
For a very long time, the neurosciences have been in an Empiricist grip. Saygin et al provides a strong argument that the E vision has things exactly backwards and that the Hebbian E-ish connectionist conception is likely the wrong way of understanding the neural and functional structure of the brain. Brains come with a lot of extrinsic structure and this structure causally determines how the brain organizes itself functionally. Moreover, at least in the case of the VWFA, Darwinian selection pressures (another kind of functional “cause”) will not explain the underlying connectivity. Why not? Because as DDL notes (1192) alphabets are around 3,800 years old and “those times are far too short for Darwinian evolution to have shaped our genome for reading.” That means that Saygin et al’s results will have no “deeper” functional explanations, at least as concerns the VWFA. Nope, it’s functionally inexplicable structure all the way down. Connectivity is the causal key. Function follows. Saygin et al speculate that what is right for the VWFA will hold for brain organization more generally. Is the speculation correct? Dunno. But being a card-carrying R you know where I’d lay my bets.
This seems like a pretty big deal to me and argues against any simple-minded view of brain plasticity, I would imagine. Maybe any part of the brain can perform any possible computation, but the fact that brains regularly organize themselves in pretty much the same way seems to indicate that this organization is not haphazard and that there is method behind it. So, if it is true that the brain is perfectly plastic (which I really don’t believe), then this suggests that computational differences are not what is responsible for its large-scale functional architecture. Saygin et al suggest another causal mechanism.
Mark A. Changizi, Qiong Zhang, Hao Ye, and Shinsuke Shimojo, “The Structures of Letters and Symbols throughout Human History Are Selected to Match Those Found in Objects in Natural Scenes,” The American Naturalist, May 2006, vol. 167, no. 5. Submitted April 11, 2005; accepted December 19, 2005; electronically published March 23, 2006.
Abstract: Are there empirical regularities in the shapes of letters and other human visual signs, and if so, what are the selection pressures underlying these regularities? To examine this, we determined a wide variety of topologically distinct contour configurations and examined the relative frequency of these configuration types across writing systems, Chinese writing, and nonlinguistic symbols. Our first result is that these three classes of human visual sign possess a similar signature in their configuration distribution, suggesting that there are underlying principles governing the shapes of human visual signs. Second, we provide evidence that the shapes of visual signs are selected to be easily seen at the expense of the motor system. Finally, we provide evidence to support an ecological hypothesis that visual signs have been culturally selected to match the kinds of conglomeration of contours found in natural scenes because that is what we have evolved to be good at visually processing.
Keywords: natural scenes, letter shape, visual signs, object junctions, ecological vision, evolution of writing.
Blind People Do Math In The Visual Cortex
September 21, 2016
People blind from birth appear to do math in a part of the brain typically devoted to vision, a new study has found. Researchers using functional MRI watched the visual cortex in the brains of congenitally blind people as they solved algebra problems in their heads.
The visual cortex didn’t merely respond, the researchers say. The more complicated the math, the greater the activity they saw in the vision center.
The same did not happen in the brains of sighted people with masks covering their eyes who did the same math exercises as the blind subjects.
It has been thought that brain regions, including the visual cortex, have entrenched functions that can change slightly, but not fundamentally, says Marina Bedny, an assistant professor of psychological and brain sciences at Johns Hopkins University and a coauthor of the study.
The new findings support more recent research showing just the opposite may be true: The visual cortex is extremely plastic and, when it has no visual signals to process, can respond to everything from spoken language to math problems.
“If we can make the visual cortex do math,” Bedny says, “in principle, we can make any part of the brain do anything.”
The brain as a whole may be extremely adaptable, Bedny says, almost like a computer that, depending on what data are coming in, can reconfigure to handle almost limitless types of tasks. It could someday be possible to reroute functions from a damaged area to a new spot in the brain, Bedny says.