A few weeks ago Viola Kozak successfully defended her doctoral dissertation at Gallaudet University. (Congratulations Viola!) It was a great experience to be on the committee, and some of Viola's findings are relevant to some of the discussions that we have been having on the blog about the nature of phonological representations. Here's a quote from the defense announcement:
The purpose of Ms. Kozak’s study was to analyze the English and American Sign Language phonological processing skills of two groups of bimodal bilingual children: hearing children of Deaf adults (Kodas) and Deaf children with cochlear implants from Deaf signing families (DDCI). Additionally, this study compared the performance of these bimodal bilingual children to that recorded in previous studies of deaf children from hearing families with cochlear implants (DHCI), who have, as a group, been found to perform less accurately on English phonological tests than their hearing counterparts. The study investigated whether or not this was the case with Bimodal Bilinguals. Overall, the two groups of bimodal bilingual children scored comparatively on all tests, and the findings suggest that early exposure to ASL from birth may serve to bolster a cochlear implant user’s spoken language acquisition following implantation.

There's a bit to unpack here, so let's take the hearing children first. There were two groups of these: hearing children of Deaf adults (Kodas, HD, n=17) and hearing children of hearing adults (HH, n=4). Phonologically-oriented tests for the participants included phonemic awareness, phonemic discrimination, and pseudoword repetition. The two groups scored comparably on these tests, showing that both are learning their phonologies "on schedule".
Now let's consider the children with cochlear implants. The group in this study was made up of Deaf children of Deaf adults (DDCI, n=3). Previous studies have looked at Deaf children of hearing adults who had received cochlear implants (DHCI) and found that their phonological development was relatively delayed. But in Viola's study, the DDCI group performed on par with the age-matched set of Kodas (who did not differ from the HH group). Here is one plot of the non-word repetition test (probably the hardest task).
The blue dots are vertically scattered in the middle among the red squares, with the lower two blue dots performing similarly to hearing children of hearing adults (HH) who were either slightly younger or slightly older. Now, this is admittedly a small number of participants, but there is not that large a population of Deaf children of Deaf adults with cochlear implants to draw on. Previous studies (for example) have shown poorer performance for CI users on phonological tasks, but those were CI users who were Deaf children of hearing adults (DHCI). So it's possible that one source of the delayed development for DHCI children is the lack of early language input: before implantation, they are receiving neither spoken language input nor sign language input (because their parents are not signers). If this is on the right track, then the good performance of the DDCI group studied by Viola (the blue dots) might be due to the fact that they did receive ASL language input prior to receiving their cochlear implants (and they also continued to use ASL after implantation). This difference would imply that prior experience with ASL phonology aids in the subsequent acquisition of spoken (English) phonology, across different modalities. And this facilitation effect would in turn support the idea that some of the representational and computational apparatus is common to both ASL and spoken phonology. All of this doesn't make phonology wholly substance-free, but it does argue for the existence of abstract, non-substantive (amodal) components of phonology, something that the SFP program strongly endorses, and that I strongly agree with.