For quite a while now, I've been wondering why natural language (NL) has so much morphology. In fact, if one thinks about morphology with a minimalist mindset, one gets to feeling a little like I. I. Rabi did regarding the discovery of the muon. His reaction? "Who ordered that?" So too with morphology: what's it doing, and why is there so much of it in some NLs (Amerindian languages) and so little of it in others (Chinese, English)? One thing seems certain: look around and you can hardly miss the fact that this is a characteristic feature of NLs, in spades!
So what's so puzzling? Two things. First, morphology is absent from artificial languages, in contrast to, say, unbounded hierarchy and long-distance dependency (think operator-variable binding). Second, it's not obviously functionally necessary (say, to facilitate comprehension). For example, there is no obvious reason to think that Chinese or English speakers, despite the comparative dearth of apparent morphology in their languages, have more difficulty communicating with one another than do Georgian or Athabaskan speakers. In sum, morphology does not appear to be conceptually or functionally necessary, for otherwise we (I?) might have expected it to be even more prevalent than it is. After all, if it's really critical and/or functionally useful, then one might expect it to be everywhere, even in our constructed artificial languages. Nonetheless, it's pretty clear that NLs hardly shy away from morphological complexity.
Moreover, it appears that kids have relatively little problem tracking it. I have been told that whereas LADs (language acquisition devices, a.k.a. kids) omit morphology in the early stages of acquisition (e.g. 'He go'), they don't produce illicit "positive" combinations (e.g. 'They leaves'). I have even been told that this holds true for languages with rich determiner systems, noun classes and fancy intricate verbal morphology: it seems that kids are very good at correctly classifying these and quickly master the relevant morphological paradigms. So, LADs (and LASs: Language Acquisition Systems) are good at learning these horrifying (to an outsider or second-language learner) details and at deploying them effectively as native speakers. So, again: why morphology?
Unburdened by any knowledge of the subject matter, I can
think of four possible reasons for morphology’s ubiquity within NLs. I should
add that what follows is entirely speculative and I hope that this post
motivates others to speculate as well. I would love to have some ideas
to chase down. So here goes.
The first possibility is that visible morphology is a
surface manifestation of a deeper underlying morphology. This is a pretty
standard Generative assumption going back to the heyday of comparative syntax
in the early 80s. The first version of this was Jean-Roger Vergnaud’s
(terrific) theory of abstract case. The key idea is that all languages have an
underlying abstract case system that
regulates the distribution of nominal expressions. If we further assume that this abstract system can be phonetically externalized, then the seeds of visible
morphology are inherent in the fundamental structure of FL. The general
principle then is that abstract morphemes (provided by UG) are wont to find
phonetic expression (are mapped to the sensory and motor systems (S&M)), at
least some of the time.
This idea has been developed repeatedly. In fact, the following is still not an unheard-of move: we find property P overtly in grammar G, so we assume that something similar occurs in all Gs, at least covertly. This move is particularly reasonable in the context of the "Greed"-based grammars characteristic of early minimalism. If all operations are "forced" and the force reduces to checking abstract features, then, using the logic of abstract case theory, we should not be surprised if a given G expresses these features phonetically.
Note that if something like this is correct (observe the if), then the existence of overt morphology is not particularly surprising, though the question remains of why some Gs externalize these abstracta while others remain phonetically more mum. Of late, however, this Greed-based approach has dimmed somewhat (or at least that's my impression) and generate-and-filter models of various kinds are again being actively pursued. So…
A second way to try and explain morphology piggy-backs on
Chomsky’s recent claims that Gs are not pairings of sound and meaning but pairings of meanings with sound. His general idea is that whereas the mapping from
lexical selection to CI is neat and pretty, the externalization to the S&M
systems is less straightforward. This comports with the view that the first
real payoff to the emergence of grammar was not an enhancement of communication
but a conceptual boost expanding the range of cognitive computations in the
individual, i.e. thinking and planning (see here).
Thus externalization via S&M is a late add-on to an already developed system. This "extra" might have required some tinkering to allow it to hook onto the main lexicon-to-CI system, and that tinkering is manifest as morphology. In effect, then, morphology is kin to Chomsky and Halle's old readjustment rules. From where I sit, some of the work in Distributed Morphology might be understood in this way (it packages the syntax in ways palpable to S&M), though I am really no expert in these matters, so beware anything I say about the topic. At any rate, this could be a second source for morphology: a kluge to get Gs to "talk."
I can think of a third reason for overt morphology that is
at right angles to these sorts of more grammatically based considerations.
There are really two big facts about
human linguistic facility: (i) the presence of unbounded hierarchical Gs and
(ii) the huge vocabulary speakers have.
Though it's nice to be able to embed, it's also nice to have lots of words. Indeed, if travelling to a foreign venue where the residents speak language V, and given the choice between 25 words of V plus all of V's grammar or 25,000 words of V plus just the grammar of simple declaratives (and maybe some questions), I'd choose the second over the first hands down. You can get a lot of distance on a crappy grammar (even no grammar) and a large vocabulary. So, here's the thought: might morphology facilitate vocabulary development? Building a lexicon is tough (and important) and we do it rapidly, very rapidly. Might overt morphology aid this process, especially if word order in a given language (and hence the PLD of that language) is not all that rigid? It could aid the process by providing stable landmarks near which content words can be found.
If transitional probabilities are a tool for breaking into language (and the speech stream), as Chomsky proposed in LSLT and as Saffran, Aslin and Newport later rediscovered, then morphological landmarks that probabilistically vary at different rates than the expressions that sit within them might serve to focus LADs and LASs on the stuff that needs learning: content words. On this story, morphology exists to make word learning easier by providing frames within a sentence for the all-important lexical content material.
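To make the landmark idea concrete, here is a toy sketch of Saffran-style statistical segmentation. The syllable stream and the local-minimum cut-off heuristic are invented purely for illustration, not a claim about actual acquisition: transitional probabilities are high inside a recurring unit and dip at its edges, and the dips can be used to posit boundaries.

```python
from collections import Counter

# Toy Saffran-style segmenter: posit a word boundary wherever the
# transitional probability P(next syllable | current syllable) dips
# below its neighbours. Stream and cut-off heuristic are invented
# for illustration only.
def segment(syllables):
    bigrams = Counter(zip(syllables, syllables[1:]))
    unigrams = Counter(syllables[:-1])
    tp = [bigrams[a, b] / unigrams[a] for a, b in zip(syllables, syllables[1:])]
    words, start = [], 0
    for i in range(1, len(tp)):
        # a local minimum in transitional probability suggests a boundary
        if tp[i] < tp[i - 1] and (i + 1 == len(tp) or tp[i] <= tp[i + 1]):
            words.append("".join(syllables[start:i + 1]))
            start = i + 1
    words.append("".join(syllables[start:]))
    return words

# Three nonsense "words" in varied order: within-word TPs are 1.0,
# TPs across word boundaries are lower (2/3, 1/2, 1/3).
order = ["bidaku", "padoti", "golabu", "bidaku", "golabu",
         "padoti", "bidaku", "padoti", "golabu"]
stream = [w[i:i + 2] for w in order for i in range(0, 6, 2)]
print(segment(stream))
# -> ['bidaku', 'padoti', 'golabu', 'bidaku', 'golabu',
#     'padoti', 'bidaku', 'padoti', 'golabu']
```

In the same spirit, invariant morphological frames would supply especially stable "edges" for a learner tracking such statistics.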
There is a second version of this kind of story that I would
like to end with. I should warn you that it is a little involved. Here goes.
Chomsky has long identified two surprising properties of NLs. The first is
unbounded hierarchical recursion, the second is our lexical profligacy. We not
only can combine words but we have lots of words to combine. A typical
vocabulary is in the 50,000-word range (depending on how one counts). How do we do this? Well, assume that, at the very least, each new vocabulary item consists of some kind of tag (i.e. a sound or a hand gesture). In fact, for simplicity, say that acquiring a word is simply tagging it (this is Quine's "museum myth," which like many myths may in fact be true). Now this sounds like it should be fairly easy, but is it? Consider manufacturing 50,000 semantically arbitrary tags (remember, words don't sound the way they do because they mean what they do, or vice versa). This is hard. To do it effectively requires a combinatoric system, indeed something very like a phonology, which is able to combine atomic units into lexical complexes. So, assume that to have a large lexicon we need something like a combinatoric phonology, and that the products of this system are the atoms that the syntax combines into further hierarchically structured complexes. Here's the idea: morphology mediates the interactions of these two very different combinatoric systems.
Meshing word structures and sentence structure is hard because the modes
of combination of the two kinds of systems are different. Both kinds play
crucial (and distinctive) roles in NL and when they combine morphology
happens! So, on this conception,
morphology is not for lexical acquisition, but exists to allow words with their
structures to combine into phrases with their structures.
The four speculations above are, to repeat, all very
speculative and very inchoate. They don’t appear to be mutually inconsistent,
but this may be because they are so lightly sketched. The stories are most
likely naïve, especially so given my virtually complete ignorance of morphology
and its intricacies. I invite those of you who know something about morphology
to weigh in. I’d love to have even a cursory answer to the question.
My speculation is that there is a dumb linear pattern recognizer of the sort that dogs seem to use to generate their expectations of what will happen next (2d state transition matrix, perhaps), and that morphology is when it does its thing, arising by originally syntactic productions falling under its sway.
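Concretely, such a recognizer could be nothing more than a table of adjacent-pair counts. A toy sketch (the class and the training strings are invented for illustration):

```python
from collections import Counter, defaultdict

# A "dumb linear pattern recognizer": a first-order (bigram) transition
# table that predicts the most likely next token. Nothing deeper than
# adjacent-pair counts is tracked.
class BigramRecognizer:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        for a, b in zip(tokens, tokens[1:]):
            self.counts[a][b] += 1

    def predict(self, token):
        # the expectation of "what happens next" after `token`
        following = self.counts[token]
        return following.most_common(1)[0][0] if following else None

r = BigramRecognizer()
r.train("the dog chase -s the cat".split())
r.train("the cat chase -s the bird".split())
print(r.predict("chase"))  # prints -s
```

On the comment's speculation, a fixed affix like the toy `-s` here is exactly the kind of high-probability "what comes next" that such a flat device would latch onto.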
I think this post runs the risk of collapsing two distinct points of crosslinguistic variation: morphological typology (i.e. the synthetic vs. analytic distinction) on the one hand, and the set of functional expressions on the other. These parameters can vary independently of one another, though, so they shouldn't be collapsed.
Judging by the languages you dichotomize (English/Chinese vs. Georgian/Athabaskan), it seems as though the question in the title ought to be "why BOUND morphology?". This is undoubtedly a worthwhile question, but it doesn't seem like the one you're really interested in here. In your discussion of the 'first possibility', it seems your quarry is the question of why abstract morphemes (/functional heads) should enjoy a greater degree of morphological exponence in one language as compared to another. However, we know that languages can differ in HOW they express particular categories (/abstract morphemes/functional heads) without necessarily differing in WHAT they express (i.e., which categories).
Perhaps this was an accident of the example languages you chose, i.e. you could just as easily have exemplified the "lots-o'-morphology" class of languages with a decidedly analytic language -- say, Atong (Tibeto-Burman), which has a huge set of free functional morphemes. This would refocus the discussion onto the question of why languages seem to exhibit striking variation in their overall degree of functional exponence.
However, if we take the trajectory of work like Cinque (1999) seriously, perhaps this question is based on a false dichotomy: while it's true that a far greater number of functional heads are realized morphologically in languages like Georgian/Athabaskan (and Atong) as compared to English/Chinese, the latter language type "compensates" with an equally large class of (adverbial) expressions realizing the specifiers of those silent functional heads. As an isolated example, take evidential information: whereas this is realized as a set of overt bound morphemes (i.e. functional heads) in a language like Quechua, it is realized as a set of adverbials (i.e. functional specifiers) in a language like English. If this approach is on track, perhaps there is no significant difference in the degree of functional exponence (or, indeed, expressive power) across languages; there is only a difference in how such structures are realized. (This is obviously an idealization: I'm not prepared to say, for example, that English has an adverbial specifier for each of the honorific heads realized in e.g. Korean, although this isn't completely inconceivable...)
I bet I am running together lots of things. I guess I was assuming that functional heads are wont to "morphologize." That's how I thought they might be tied together. The model is case theory. Now I agree that there may also be a selection among functional heads from a universal inventory that might add another layer of complexity. But, as I noted, I don't know much here (hence my speculation is factually unencumbered). Are you suggesting that this eliminates the first option mooted?
I think 'Why morphology?' should perhaps be replaced with a series of 'why' questions for a range of morphological phenomena that don't have much in common theoretically. E.g., 'Why grammatical gender?'; 'Why fusion?' I'd be surprised if there's a single answer to both of those questions. Morphology just seems to be a particularly rich source of examples of apparently unnecessary complexity.
Any thoughts about the smaller why questions? If we break the large question down, what stories can we tell?
Delete"First, [morphology]'s absent from artificial languages, ..."
Maybe, maybe not. It's difficult at times to think what this would be like in a programming language. (As an aside, Jonathan Kaye makes a very similar argument about phonology. Basic has a PRINT statement, but we don't geminate the final T of PRINT to match the first character of the variable name: it's PRINT FOO, not PRINF FOO. So why do natural languages bother with phonology?) The closest analog I can think of within programming languages is the concept of naming conventions.
There are a few that I can think of that might be "like" morphology. In Scheme it is a convention to name a function with side-effects (or that is destructive) with a final !, as in set!, set-car! and so on. In Python a single leading underscore denotes an internal/private variable, i.e. one not to be used directly from outside the module. And "circumfixing" double underscores mark "magic" variables (similar to the use of circumfixed asterisks in Common Lisp). In (POSIX) C the "suffix" _t is reserved for new type names. An older C convention (originating in K&R, probably) is that macros (introduced by preprocessor directives) are written in all caps, like FILE or EOF (akin to morphological emphasis?). In Haskell types start with capitals and functions start with lowercase. And, for an ancient example, the (implicit) type of a variable in Fortran was determined by the first letter of the name: INTEGER for I through N, REAL otherwise.
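To put a couple of these conventions side by side, here is a toy Python class (the class and its names are invented for illustration): the "shape" of each name, rather than the syntax around it, signals its grammatical role.

```python
# Toy illustration: in Python the "morphology" of a name signals its
# role. These are conventions, not enforced grammar.
class Tally:
    def __init__(self):
        self._count = 0          # leading underscore: internal/"private"

    def __len__(self):           # circumfixed underscores: "magic" method
        return self._count

    def bump(self):              # plain name: ordinary public API
        self._count += 1

t = Tally()
t.bump()
t.bump()
print(len(t))  # prints 2; the __len__ "circumfix" hooks into len()
```

Note that only the circumfixed form carries real grammatical force here (the interpreter routes len() through it); the leading underscore is pure convention, readable but unenforced.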
So morphology is to be expected given that even artificial languages have it. Fine, what does it do? Why should we expect it? Also, are there forms we see in NL absent from artificial languages, and vice versa?
I'm not sure myself what to make of those similarities that I noted. I do have an idea what they're "for" in the programming language case, but it's a very old-fashioned view of the problem. They provide a set of (sometimes extensible) types which are what the syntax of the language traffics in. But this is just model theory a la Tarski: a smallish set of categories and combinatorial rules to define wffs. So "morphology" (naming conventions) is there to aid in typing the (sub-)expressions. But I don't think this amounts to any kind of satisfying answer to the questions you are asking.
Sounds more like 'why categories?', e.g. N, V, A etc. Are phonologists curious about why morphology exists, or is it something that one just assumes as a matter of course?
What happens if we ask the question from the opposite direction: Why sentence-syntax? Why are there word boundaries? Why isn't each utterance in every language a word? Why don't we all speak Chukchi?
We do ask why syntax. We ask where it comes from, why it has the properties it has, how it fits with other interfaces, etc. It's for this reason that I was wondering about morphology. Why is there so much of it in NL? I took this as analogous to asking why we have displacement (one answer: Merge makes it inevitable), why we have the restrictions we find (computational efficiency is one proposal), why the hierarchy (to interface with CI, which uses this hierarchy for mapping to "logical forms" readable by CI). Now, these proposals are all tentative, but it's useful to think about this. OK, let's do the same for morphology. Why is there so much/little of it, and why does it exist? Maybe you find its existence obvious. OK, why?
Of course we're asking 'why' questions about everything, syntax included. But your posting suggests (maybe I misread it) that the existence of morphology is a tougher conundrum at present than sentence-syntax. You gave reasons:
"First, it [morphology]'s absent from artificial languages, in contrast to, say, unbounded hierarchy and long distance dependency (think operator-variable binding). Second, it's not obviously functionally necessary (say to facilitate comprehension)."
There are presuppositions here that I think are probably correct, but might be interesting to examine. For example, I share your intuition that the units of artificial languages map naturally onto the words and higher-level constituents of sentence-syntax, and not onto morphemes and their higher-level constituents below the word level. Otherwise, we could say "sentence syntax is absent from artificial languages" (they only have morphology). But why does that feel weirder to us? Like you, I can spin out some answers (do we find word-internal variable-binding?), but they're worth pondering, I think. It may be that a more pointed version of your question might be: why do natural languages have "choke points" like the word and phase, across which a variety of processes do not apply, but within which Merge does its stuff in a fairly uniform fashion.
First, I intended nothing invidious. It's precisely because morphology seems so ubiquitous that I hope for a good reason for its presence. Also, like you (in your rewording) I find the need for some kind of combinatorics obvious; the question is why there are different kinds. One answer is to deny that they are different, i.e. morphology is just syntax properly viewed (mirror principle reasoning is like this, I believe). Another is that they are different, but both are necessary. The job then is to provide the reasons. I like the "choke point" terminology; it's what I was pointing to in the fourth option. So, with this change accepted: why these different kinds of systems? Does rephrasing it this way make for some fruitful speculations, or any?
There is a functional advantage of a rich inflectional morphology: it allows for easy topic-focus articulation. As to derivational morphology, some seems to be necessary if you have a vocabulary of 50,000 and about two thousand roots.
DeleteWhy are they different? Perhaps due to extra-linguistic factors. A rich inflectional morphology can develop in a linguistically homogeneous community with little dynamics. On the other hand much of overt inflection may disappear (and little will develop) under dynamic conditions, particularly when different linguistic communities get often into contact, merge etc.
First, morphology is not absent from artificial languages; Esperanto, for example, uses it extensively.
Second, morphology is not essential, but it's also not useless. While it may seem redundant in languages that show remnants of an eroding case system, it's very useful when it's still functional. Languages with complex morphology tend to avoid ambiguity better than languages with simple morphology and are easier to parse. Languages with complex morphology tend to state more details explicitly, while languages with simple morphology tend to rely heavily on context. Complex morphology may also allow one to say the same thing with fewer words.
Third, it's often not clear what is and what isn't morphology. Most languages don't mark word boundaries in any way, so it's often ambiguous where words start and end, but the analysis is usually very biased toward reflecting the written language. For example, pronominal clitics in most Romance languages follow very strict rules and could be plausibly analyzed as suffixes for polypersonal agreement.
So are you saying that there is no reason for it? That NL would function just as well without it and there really is no cause for it; it's just an add-on that might be useful but not really that big a deal?
Did you read my post?
Yup, but clearly I did not get your point. It sounds like you don't think that there is a deep function for it; it just helps out but is dispensable. In other words, an accident. I may be misinterpreting you.
DeleteAs for Esperanto, well it's not the kind of artificial language I had in mind.
The idea that morphology (bound and unbound) aids word learning is a tempting one that I've heard before from a few colleagues in a speculative mood. It would predict a correlation between morphological inventory and the number of words learnt at a given age. In other words, English- or Chinese-speaking kids should possess fewer words on average at age 4 or 6 than kids speaking Georgian or Athabaskan, keeping, as usual, all other variables relatively unchanged. More morphology = more words at age x. That should be testable. I am however skeptical that any significant differences would be found. I think speculations #1, #2 and #4 all hold some promise, but they would indeed require considerable operationalization of the subquestions.
I'm not sure the envisaged experiment is the right one. Morphology can help with word acquisition. So, all things being equal, a language with it should allow more rapid word acquisition. However, matters are not generally equal. Complex morphology often goes with more liberal word order (so I am told). But strict word order can also serve to provide landmarks. So what we want is to find a language with lots of morphology and strict word order, to see what happens. I admit that I too am skeptical that there is much of a boost, given that kids learn words in English so quickly, meaning, I take it, that whatever they get here is more than enough. But maybe in free word order languages they don't get what the English kid does and so need other fixed points. Maybe. That's the best case I can think of for this option.
It seems like Icelandic vs English should suffice: rich vs poor morphology, and fixed word order.
Icelandic is more flexible than English, but really does seem to have positions for the main GRs (Halldór Sigurðsson's latest on Lingbuzz is a very nice reminder of this), while it seems plausible to me to think that Modern Greek really doesn't, in spite of having fewer cases. Greek does, like Warlpiri, have topic & focus positions at the right periphery, but, in spite of the very interesting work that people like Axiotis Kechagias have done on it, I don't get a strong sense of firm syntactic positions for GRs and, generally, further to the left.
I thought that there was quite a bit of Topicalization in Icelandic, no? All that V2 stuff? Isn't there also quite a bit of Mittelfeld movement?
Sure, but surely that does not free word order make?
Probably not. Do you know any data on this? Relative speed of lexical acquisition?
I've only just been alerted to this thread. As a non-Minimalist and non-syntactician (a humble morphologist, in fact), can I just ask for a clarification? I have assumed for a long time that it's impossible to draw a distinction between words and phrases in Minimalism: you have lexical roots and functional heads and these get combined by Merge and then the result is spelled out by Distributed Morphology, but words are merely an optical/auditory illusion (e.g. Julien, M. 2002 Syntactic Heads and Word Formation). Likewise, 'cat' is a phrasal idiom. So what is this blog about? Or has Minimalism recently introduced a word/phrase distinction without my being aware of it? And if so, what on earth does it look like?
Not sure what you are asking. The post just assumed that there is something we can descriptively call 'morphology.' The question is why it should exist, as one might imagine languages without it (e.g. Chinese has less than most, it is claimed, and artificial languages have very little). This is analogous to other 'why' questions: why locality conditions in syntax, why c-command for antecedence, why X' structure in phrases? Analogously: why morphology?
I'm not sure where your remark fits in here. One thing: minimalism and DM are not quite the same thing. One can imagine one without the other. So far as I can tell, Minimalism per se has no dog in the fight you allude to, at least not to my knowledge.