A Hilbert Problem for Phonology

A foundational pillar of generative phonology is that the best explanation of the systematic pronunciation of morphemes is to posit a single underlying representation for each morpheme and to derive the surface pronunciations with a context-sensitive transformation. This pillar is present and obvious in the founding literature on generative phonology, and remains part of Optimality Theory today. I will adopt this hypothesis as a premise, but readers may be interested in arguments in favor of it. They can be found in most textbooks on generative phonology, though sometimes it takes a little digging. The first part of chapter 6 of (13) is one good place.
Accepting the above hypothesis means generative phonologists must answer the following three questions.
- What is the nature of the abstract, underlying, lexical representations?
- What is the nature of the concrete, surface representations?
- What is the nature of the transformation from underlying forms to surface forms?
Once an underlying representation is posited for a morpheme, one which can be distinct from its surface representations, an obvious question poses itself:
- How distinct can underlying forms be from surface forms?
This question has not attracted much attention in recent work in generative phonology, but in my opinion, the question is very much alive today and lurks under the surface in many of the modern debates, such as the extent to which phonology can be reduced to physiological principles governing articulation and perception (18,19,5,9,8,1).
The question above is not the exact question I wish to pose as the ``Hilbert'' question. Below I will provide what I think the question for abstractness should be for the 21st century, but first I want to discuss some of the history, which will help motivate the form the question will ultimately take.
The question of abstractness has persisted since the founding of generative grammar in the 1950s and 1960s. The landmark text SPE (2), which presented a coherent vision and a culmination of work in the preceding decade, was criticized for an excessive reliance on abstract underlying representations. Kiparsky's influential ``How abstract is phonology?'' (14) argued that the answer is ``not too much'' and proposed strong conditions on the allowable disparities between underlying and surface forms. Hyman's response ``How concrete is phonology?'' (10) argued that the evidence (from Nupe) did not support Kiparsky's conditions and concluded that the answer to his own question was ``not necessarily all that much.''
A string of work in the 1970s examined the question more closely, notably the first chapter of Kenstowicz and Kisseberth's 1977 monograph (12), which is partly recapitulated in their 1979 textbook (13). Their conclusion appears to be that there must be some way to constrain the disparities, but well-motivated cases of absolute neutralization appear to defy every hypothesis they consider.
Absolute neutralization refers to ``abstract phonemes'' which are present underlyingly but never surface as such. Evidence has been offered for such abstract phonemes in a number of languages, including Sanskrit (14), Nupe (10), Dida (11), Okpe (6), Yokuts (13), Polish (7), Maltese Arabic (17), and others. The evidence usually takes the form of a particular speech sound exhibiting two distinct kinds of behavior, one expected and one unexpected. The unexpected behavior makes sense if the speech sound is underlyingly a distinct abstract phoneme, which then `absolutely' neutralizes with another phoneme in the surface form. The more pieces of evidence of this sort that exist in a language for an abstract phoneme, the more compelling the case for it. Alternative explanations for the schizophrenic behavior a speech sound exhibits often come down to some kind of lexical marking of exceptions (pace Kiparsky), which most phonologists agree is necessary to some extent anyway, at least to handle truly exceptional forms. One way to pose the question of abstractness, then, is: how much evidence is enough for the abstract analysis to be preferred over the one which admits lexical exceptions? (Cf. Legate and Yang's Tolerance Principle (15).)
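The Tolerance Principle gives one concrete threshold of this kind: a productive rule over N relevant items tolerates at most N/ln N exceptions. Here is a minimal sketch of that calculation (the function names are mine, for illustration only):

```python
import math

def tolerance_threshold(n):
    """Tolerance Principle threshold: at most n / ln(n) exceptions
    for a productive rule over n relevant lexical items."""
    return n / math.log(n)

def rule_is_productive(n_items, n_exceptions):
    """A rule survives iff its exceptions fit under the threshold."""
    return n_exceptions <= tolerance_threshold(n_items)
```

For 100 relevant lexical items the threshold is 100/ln 100, about 21.7, so up to 21 exceptional items are still compatible with positing the rule.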
Not much concrete progress has been made with respect to abstractness in the past 40 years, but the question remains alive today. Odden, in his 2005 textbook (17), vigorously defends absolute neutralization provided the analyses are well-motivated. On the other hand, Hayes, in his 2009 textbook (7), is more cautious. He presents the possibility of such an analysis, but suggests that a more careful look at the data may indicate that treating aberrant behavior as exceptional is more appropriate.
With this background in mind, I would like to pose the question of abstractness this way:
- What principles determine whether an abstract analysis of the observable facts is warranted or not?
Here are three reasons why I think the 21st century is a good time for phonologists to focus on the abstractness question.
One, recent work on contrast by phonologists in different traditions has made some surprising points of contact. Elan Dresher's work on the contrastive hierarchy (3) and Bruce Tesar's work on output-driven maps (20) both reveal how abstract underlying representations can fall out from featural contrast. This relates to Odden's observation that we are unwise to unconditionally fear absolute neutralization. He writes (17): ``Although the specific [abstract] segment ... is not pronounced as such in the language, concern over the fact that pronunciation do not include that particular segment would be misguided from the generative perspective, which holds that language sounds are defined in terms of features and the primary unit of representation is the feature, not the segment.'' Together, this work suggests a formal principle: abstract phonemes may be licensed just in case the features that define them have already been established as contrastive (due to other aspects of the phonology).
Two, there is a better handle on model comparison now than in the past. Determining which of two distinct models is better is hard, and has been a difficult subject in the philosophy of science as well as in the many disciplines grappling with modeling all sorts of phenomena. The issues here are ones familiar from the Minimum Description Length, Kolmogorov complexity, and Bayesian communities: How is the size of a grammar measured? How do we score how well a grammar accounts for the data? Why should some grammars be deemed shorter than others (the issue of priors)? Armed with important prior assumptions about how grammars and data are encoded, these theories can provide principles which should be able to tell us when lexical markings or abstract phonemes make for better analyses.
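As a toy illustration of how such scoring could adjudicate between analyses, consider comparing a lexical-exception grammar against an abstract-phoneme grammar by total description length. Every bit cost below is an invented placeholder, not a claim about how grammars are actually encoded; the substantive work lies precisely in motivating such costs.

```python
# Toy MDL comparison; all bit costs are invented for illustration.
RULE_COST = 40.0   # assumed bits to state an abstract phoneme plus its
                   # absolute-neutralization rule
MARK_COST = 6.0    # assumed bits per lexical exception mark

def dl_exceptions(n_aberrant):
    """Description length of the analysis that marks each aberrant item."""
    return MARK_COST * n_aberrant

def dl_abstract(n_aberrant):
    """Description length of the abstract analysis: one flat rule cost,
    however many aberrant items the rule covers."""
    return RULE_COST

def preferred(n_aberrant):
    """The shorter grammar wins (ties go to the concrete analysis)."""
    return ('abstract' if dl_abstract(n_aberrant) < dl_exceptions(n_aberrant)
            else 'exceptions')
```

With these placeholder numbers the crossover falls at 40/6, about 6.7, so seven or more aberrant items tip the balance toward the abstract phoneme.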
Three, I like to think that work in computational phonology also has something to lend to this debate. Two old theorems I have been studying recently say a lot about the nature of the (string) representations used to describe constraints or transformations from underlying to surface forms. The first, by Medvedev (16), says that every regular language is the homomorphic image of a strictly 2-local language. In layman's terms, this means every regular language, which can model complex kinds of constraints, can be derived from a strictly 2-local (SL2) language, which is a very simple kind of language. It suggests that what looks complex is actually very simple. But the trick is that the SL2 language has a bigger alphabet, and the latent information hidden in the more complex regular language is made explicit in the SL2 language. Thus, by making our 'alphabet' more abstract we simplify the constraints we may want to state. But the price is that the alphabet no longer represents observables (so one consequence is that learning remains as difficult as before). The second theorem is about string-to-string transformations. It is well known that deterministic regular functions are less expressive than non-deterministic ones. Elgot and Mezei (4) show a deep connection between non-determinism and abstractness: basically, any non-deterministic function can be described as the composition of two deterministic functions, provided the intermediate form is allowed to make use of symbols not in the input alphabet. The fact that the `intermediate' alphabet contains symbols not in the input alphabet introduces a degree of abstractness (the extra symbols represent abstract information). Finally, while these theorems are stated in terms of strings, recent work on model-theoretic approaches to phonology suggests that analogues exist when the representations are some data structure other than strings.
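To make the two theorems concrete, here is a toy sketch. The particular DFA, the boundary symbols, the abstract mark 'A', and all function names are my own illustrative choices, not constructions taken from the cited papers. The first half builds an SL2 grammar over an enriched alphabet whose image under an erasing homomorphism is a regular language (Medvedev); the second half decomposes a map that no single left-to-right deterministic pass can compute into two deterministic passes through an abstract intermediate alphabet (Elgot and Mezei).

```python
from itertools import product

# --- Medvedev's theorem, in miniature --------------------------------
# Regular language L: strings over {a, b} with an even number of a's.
# DFA for L: states 0 (even, accepting) and 1 (odd); start state 0.
DELTA = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
START, FINAL = 0, {0}

def in_regular(w):
    q = START
    for x in w:
        q = DELTA[(q, x)]
    return q in FINAL

# Enriched alphabet: pairs (symbol, state-reached-after-reading-it).
# The SL2 grammar is just the set of permitted bigrams, with '>' and '<'
# marking the word boundaries.
BIGRAMS = set()
for (q, x), r in DELTA.items():
    if q == START:
        BIGRAMS.add(('>', (x, r)))
    for (q2, y), s in DELTA.items():
        if q2 == r:
            BIGRAMS.add(((x, r), (y, s)))
    if r in FINAL:
        BIGRAMS.add(((x, r), '<'))
if START in FINAL:
    BIGRAMS.add(('>', '<'))          # the empty string is in L

def in_sl2(v):
    """Membership in the SL2 language: every adjacent pair is permitted."""
    padded = ['>'] + list(v) + ['<']
    return all((padded[i], padded[i + 1]) in BIGRAMS
               for i in range(len(padded) - 1))

def annotate(w):
    """The enrichment of w: pair each symbol with the DFA state it reaches."""
    q, out = START, []
    for x in w:
        q = DELTA[(q, x)]
        out.append((x, q))
    return out

def erase(v):
    """The homomorphism h: forget the state, keep the surface symbol."""
    return ''.join(x for x, _ in v)

# L is exactly the image of the SL2 language under h.
for n in range(7):
    for tup in product('ab', repeat=n):
        w = ''.join(tup)
        assert in_regular(w) == in_sl2(annotate(w))
        assert erase(annotate(w)) == w

# --- Elgot & Mezei, in miniature -------------------------------------
# Target map: rewrite every 'a' as 'b' iff the string ends in 'c'.  A
# single left-to-right deterministic pass cannot do this (it would have
# to guess how the string ends), but a right-to-left pass into the
# abstract intermediate alphabet {a, b, c, A}, followed by a
# left-to-right pass, can.

def right_pass(w):
    """Deterministic right-to-left pass: mark each 'a' with the abstract
    symbol 'A' when the string ends in 'c'."""
    out, ends_in_c = [], False
    for i, x in enumerate(reversed(w)):
        if i == 0:                    # the rightmost symbol decides
            ends_in_c = (x == 'c')
        out.append('A' if (x == 'a' and ends_in_c) else x)
    return ''.join(reversed(out))

def left_pass(v):
    """Deterministic left-to-right pass: the abstract 'A' surfaces as 'b'."""
    return ''.join('b' if x == 'A' else x for x in v)
```

The abstract symbol 'A' plays exactly the role described above: it is never an input or output symbol, but carrying it through the intermediate form is what lets two deterministic passes do the work of a non-deterministic one.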
The problem of abstractness is a live one in phonology, but it has not received as much attention recently as it deserves. I like to think phonologists at the dawn of the 22nd century will have a much better handle on the issue than we do now, in part because phonologists alive today, even if they are currently rolling around in diapers, will have made some real progress in understanding it.
Happy new year!
- Juliette Blevins.
Evolutionary Phonology: The Emergence of Sound Patterns.
Cambridge University Press, 2004.
- Noam Chomsky and Morris Halle.
The Sound Pattern of English.
New York: Harper & Row, 1968.
- Elan Dresher.
The Contrastive Hierarchy in Phonology.
Cambridge University Press, 2009.
- C. C. Elgot and J. E. Mezei.
On relations defined by generalized finite automata.
IBM Journal of Research and Development, 9(1):47-68, 1965.
- Mark Hale and Charles Reiss.
Substance abuse and dysfunctionalism: Current trends in phonology.
Linguistic Inquiry, 31:157-169, 2000.
- Morris Halle and G. N. Clements.
Problem Book in Phonology.
Cambridge, MA: MIT Press, 1983.
- Bruce Hayes.
Introductory Phonology.
Wiley-Blackwell, 2009.
- Bruce Hayes, Robert Kirchner, and Donca Steriade, editors.
Phonetically Based Phonology.
Cambridge University Press, 2004.
- Jeffrey Heinz and William Idsardi.
What complexity differences reveal about domains in language.
Topics in Cognitive Science, 5(1):111-131, 2013.
- Larry Hyman.
How concrete is phonology?
Language, 46(1):58-76, 1970.
- Jonathan Kaye.
The mystery of the tenth vowel.
Journal of Linguistic Research, 1:1-14, 1980.
- Michael Kenstowicz and Charles Kisseberth.
Topics in Phonological Theory.
Academic Press, New York, 1977.
- Michael Kenstowicz and Charles Kisseberth.
Generative Phonology: Description and Theory.
Academic Press, 1979.
- Paul Kiparsky.
Abstractness, opacity and global rules.
In O. Fujimura, editor, Three Dimensions of Linguistic Theory, pages 57-86. Tokyo: TEC, 1973.
Part 2 of ``Phonological representations''.
- Julie Ann Legate and Charles Yang.
Assessing child and adult grammar.
In Robert Berwick and Massimo Piattelli-Palmarini, editors, Rich Languages from Poor Inputs. Oxford University Press, Oxford, 2012.
- Yu. T. Medvedev.
On the class of events representable in a finite automaton.
In Edward F. Moore, editor, Sequential Machines: Selected Papers, pages 215-227. Addison-Wesley, 1964.
Originally published in Russian in Avtomaty, 1956, 385-401.
- David Odden.
Introducing Phonology.
Cambridge University Press, 2005.
- John J. Ohala.
The listener as a source of sound change.
In C.S. Masek, R.A. Hendrik, and M.F. Miller, editors, Papers from the parasession on language and behavior: Chicago Linguistics Society, pages 178-203. 1981.
- John Ohala.
The relation between phonetics and phonology.
In William J. Hardcastle and John Laver, editors, The Handbook of Phonetic Sciences, pages 674-694. Blackwell Publishers, 1997.
- Bruce Tesar.
Output-Driven Phonology.
Cambridge University Press, 2014.
- I would like to thank the participants of my Fall 2014 seminar in phonology ``Abstractness and Harmony in Phonology'' for the many excellent discussions and for the work they have undertaken in better understanding arguments for and against how abstract phonology can/should be. They are Adam Breiner, Nicole Demers, Hyun Jin Hwangbo, Adam Jardine, Young-Eun Kim, Huan Luo, Kaylin Matocha, Taylor Miller, Hyun Jin Park, Curt Sebastian, and Kristina Strother-Garcia.