Sunday, July 14, 2013

Levels of Adequacy in Generative Grammar


REVISED July 15/2013

First off, thanks to Ben, David and Andrew for correcting me. The discussion is not in Aspects as stated below (or at least not in the way I recalled it; but see chapter 4 for some discussion), but in Current Issues in Linguistic Theory. So don't fret if in your (re)reading of Aspects chapter 1 you miss it. Ben also asked that I expand a bit on the relation between observational and descriptive adequacy. I have added something at the end to expatiate on the distinction. Thanks to Avery for adding to this. I encourage people to look at the comment section for further enlightenment.

****

In the last several posts I have defended the position that one of the benchmarks for theory evaluation ought to be how well a given proposal helps answer a central why question. Plato’s Problem lays out the central puzzle in GB, and it is worth reviewing how Generative Grammar (GG) incorporated concern for both low-level observational details and higher level problems in evaluating ongoing proposals. The locus classicus is, of course, in Aspects chapter 1.

BTW, before going on, I found to my dismay that students no longer read Chapter 1 as a matter of course. In teaching intro to Minimalism here at the LSA summer institute, I ran a random poll of the class and discovered that far fewer than a third had ever read it. Given that this is one of the seminal documents in GG and lays out many of the basic foundational premises as clearly as has ever been done, this is just nuts. So, if you, dear reader, have never read it (in fact, read it multiple times; aim for about 50), then stop reading this silly blog now and go read it!!

Ok, to resume: In Aspects Chomsky distinguished three levels of empirical adequacy:

(i) Observational Adequacy
(ii) Descriptive Adequacy
(iii) Explanatory Adequacy

In the context of Plato’s Problem (PP) they were interpreted as follows: a proposed grammar G of a language L (i.e. GL) is observationally adequate if it covers the relevant data points (generally consisting of acceptability(-under-an-interpretation) judgments). A GL is descriptively adequate if it accurately describes the internal (steady) state of a native speaker of L. Last, a theory of UG is explanatorily adequate when it, in combination with the PLDL (the primary linguistic data of L that the child uses to attain GL) can derive the descriptively adequate GL. We can then extend the notion of explanatory adequacy to GL as well: the proposal is explanatory just in case it covers the relevant observational data and is derivable from UG given PLDL.
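
To make the logical shape of these definitions explicit, here is a minimal schematic sketch; the function and method names (accepts, acquire) are invented stand-ins, not anything from Chomsky and not a model of acquisition:

```python
# Illustrative only: 'G', 'UG', etc. are stand-in objects with invented methods.

def observationally_adequate(G, judgments):
    """G covers the observed acceptability(-under-an-interpretation) data:
    for each sentence, G's verdict matches the native speaker's judgment."""
    return all(G.accepts(sentence) == verdict for sentence, verdict in judgments)

def descriptively_adequate(G, speakers_grammar):
    """G correctly describes the (steady-state) grammar the speaker has attained."""
    return G == speakers_grammar

def explanatorily_adequate(UG, PLD_L, G_L):
    """UG, fed the primary linguistic data of L, derives the descriptively
    adequate grammar of L."""
    return UG.acquire(PLD_L) == G_L
```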

Note the important distinction between observational and descriptive adequacy. The evaluation of GLs faces in two directions: towards the sentential observations/judgments that derive from descriptively adequate Gs, and towards FL, from which these Gs descend.  Thus, descriptive adequacy commits hostages to the structure of FL, for a proposal fails to be descriptively adequate if it fails to grow out of FL when appropriately watered with PLDL. In other words, covering the sample data was never considered sufficient in itself for attaining descriptive adequacy. That required attention to all three levels, i.e. covering the data with the “correct” grammar, derived from an adequate UG.

I always thought that one could add a fourth level to this, doing for UG what (ii) does for particular Gs:

(iv) Explanatory Descriptive Adequacy

This last would select out the actual UG we have, not merely some UG or other from which GL can be derived given PLDL.  However, given how hard it is to find any explanatorily adequate UG, it makes sense to be satisfied with theories of UG that can manage (iii) (to my knowledge, none has yet been produced, though there are sketches). I return to (iv) below.

Not surprisingly, within the context of PP all three levels play an important justificatory role for particular analyses. Thus, an account that fails to “cover” the relevant judgment data is less well regarded than one that does. So too, a particular story that covers these data but does not describe our actual G is justly discarded.  One relevant question is how one might know that an account that is observationally adequate is nonetheless possibly false.  One way is that it fails to comport with what we take to be features of UG (others include being “too complex and inelegant,” “missing a generalization,” etc.[1]).  So, for example, we might find a theory that covers the relevant language-particular data but invokes operations that are UG-inconsistent, something we discerned in studying the Gs of other Ls.  As we assume that any given G must be structurally consistent with other Gs (e.g. the G of English can constrain what we take to be a possible rule in the G of Japanese), we can argue that Gs that are observationally adequate (i.e. cover the relevant facts within any given L) are nonetheless inadequate if they invoke operations or principles inconsistent with what we find in other Gs.  As any child, we assume, can learn any language, the G that it actually (accidentally) acquires should look similar to the other Gs that it could have acquired. In this way, the larger explanatory goal, discovering the structure of FL/UG, impacts our evaluation of candidate proposals.

A similar three-level scheme, with a regulative role for Darwin’s Problem, can be developed for the Minimalist Program (MP). Recall that the aim is to understand why we have the FL/UG we have and not some other. So an MP proposal will be observationally adequate if it replicates the right “laws of grammar” (e.g. the principles within GB are “roughly” observationally adequate). An MP proposal will be descriptively adequate if it describes the actual FL/UG that is our biological endowment, and a theory will be explanatorily adequate if, combined with a plausible evolutionary scenario, it can account for how the descriptively adequate FL/UG arose in the species. I have elsewhere discussed various intermediate projects that could push this explanatory project forward. Here I just want to note that, as in the discussion above, it is useful to have the higher-level explanatory goal regulate the evaluation of specific proposals.

I hope it goes without saying that Plato’s and Darwin’s Problems are pursued in tandem and not seriatim. Both are looking for the correct FL/UG. It’s a factor in evaluating descriptive adequacy in the context of Plato’s Problem, and it is the object of inquiry in pursuing Darwin’s.  Observational concerns affect explanatory ambitions and, hopefully, the reverse is true as well.  Thus, the projects are not separate, but go hand in hand.  Nonetheless, it is interesting to note that, as Chomsky likes to say, “from the earliest days of Generative Grammar” there has been this interplay between observational work and the higher-level questions that such work was in the service of pursuing. Thus, the minimalist emphasis on explanation, and the insistence that it play a central role in theory evaluation, is hardly a GG novelty. It was always such, and now it is such in spades.

Addendum:

Here's how I understand the distinction between observational and descriptive adequacy. Chomsky draws an implicit distinction between grammars that cover the relevant data of interest and grammars that accurately describe a speaker's mental grammatical state.  The former grammars are observationally adequate, the latter descriptively adequate.  Now, as Avery nicely observes, there are even levels of observational adequacy: does a grammar cover the right range of acceptability judgments, does it get the right interpretations (theta roles, scope, binding), does it "capture" the apparent generalizations? Grammars that do so are observationally adequate. Note, this is not a trivial hurdle.  A good observationally adequate grammar is nothing to sneeze at.  Even more so when, as I suggested, we extend the same courtesy to UGs in the context of MP. However, being observationally adequate is one virtue, being descriptively adequate another. A grammar of L attains descriptive adequacy if it correctly describes the mental state achieved by a competent speaker of L (actually an ideal speaker-hearer, but let's put that aside).  What makes for the gap between observational and descriptive adequacy?  Well, being acquirable as a product of FL.  Descriptively adequate grammars are those that FL/UG generate on the basis of the PLD of L.  So, among the possible observationally adequate grammars, the descriptively adequate ones are those that are products of FL. Thus descriptive adequacy adverts to the concerns of Plato's Problem, a sketch of the latter bearing (at least implicitly) on the step from observational to descriptive adequacy.

If this is correct, then from the earliest days of GG, Chomsky has insisted on placing the evaluation of grammatical proposals in the widest possible context, with Plato's Problem (a later dubbing) playing a central role. This role became more prominent, in my view, as we made more progress. However, it was there from the start.  Minimalism has enriched this evaluative measure yet again by highlighting various other factors (e.g. Ockham, Darwin's Problem, etc.) and including them as relevant measures of theory evaluation. Of course, there is no algorithm for weighing these factors and trading them off against one another when they conflict. But that's par for the course.  That's a domain for scientific judgment, not mindless rules of method. However, it is easy to ignore the more abstract concerns in favor of covering the more easily available data points. The discussion of levels of adequacy is a useful conceptual reminder that the fish we are trying to fry is a very big one and that there are many things that go into frying it.




[1] For a classic argument like this, see Chomsky’s discussion of the inadequacy of Phrase Structure Grammars as ones that are clearly too cumbersome and redundant and that miss obvious generalizations. In effect, they fail to be compact enough.

35 comments:

  1. Just out of interest, where exactly in Aspects is it that Chomsky discusses observational adequacy?
    (I myself would probably have pointed to "Current Issues in Linguistic Theory" as the locus classicus )

    1. I had the same question, since "Current Issues" is what I cite when I discuss this stuff in class. So I just checked (I have a scanned OCR copy of chapter 1), and indeed he reviews the distinction succinctly from the bottom of page 26 through the top of page 27.

    2. Perhaps I'm being blind, but I can't find anything about observational adequacy at those pages in Aspects (print version), or indeed anywhere in the chapter.
      Descriptive and Explanatory adequacy appear in the index, but not Observational.

    3. No, you're not blind - I was careless in reading Benjamin Boerschinger's question, and you're right. I think it is in Current Issues, not Aspects, that he goes through all three levels of adequacy.

    5. Having spent several (i.e. three) months staring at the specific paragraphs in Aspects, my retina scars and I can here clarify that the three Levels of Adequacy were quite specifically mentioned as Enumerative, Descriptive, and Explanatory.

      If departure from the canon is permitted in a retrospective such as this, in my view at the time it became possible, from a Philosophy of Science / Kuhnian framework, to read Chomsky's characterization at surface structure as indexing the state of the Linguistic Art across a diachronic transition from Logical Positivism ("tag and bag" enumeration) through Empirical Descriptivism ("emergent properties" analyses, replete with informant-specific categories and similar burdensome 'description' mechanisms) to what today might be called algorithmic/programmatic 'explanation' (essentially Post grammar [1936], where recursion, rewrite rules, the notion of a Linguistically Significant Generalization, etc. were drivers), in essence starting the generative tradition.

      All of this to my eyes seemed at surface a specific message for his fellow practitioners - word lists and paired sentences certainly have their place in language description, but one should in any case aim for explanation.

      For those still reading this, it was the deeper structure rather than the levels inventory outlined above -- specifically the discussion (at the bottom of an ensuing even numbered page [the number of which has fortunately faded from recollection!]) of Strong Generative Adequacy, which to this day I consider a most definitive characterization of computational parsimony that had me captivated for so many months.

  2. Ah, early-onset Alzheimer's. Thx for the correction.

  3. In Current Issues, beginning in section 2 (pp. 62-63 in the Fodor & Katz anthology version), descriptive adequacy seems to be defined in two subtly different ways:

    a) giving a correct account of linguistic intuition, especially, in the sequel, for syntax, of intuitively perceived grammatical relations (probably best thought of these days as something pretty close to theta roles).

    b) specifying the observed data "in terms of significant generalizations that express the underlying regularities of the language".

    (b) seems to me to break down further into i) getting the right predictions about yet-to-be-observed behavior and ii) getting the factorization of the description of behavior correct. A grammar predicting that you could say 'ég(N) vonast til að vanta ekki einan(ACC) í tímanum' in Icelandic would be achieving i). Using the same formation principles for NPs in subject and object position would be achieving ii).

    The Aspects discussion seems to me to be a bit different, primarily directed towards (a), ignoring (b) (unless I've missed something, very possible).

    So, I think the levels are much more subtle and complex than people used to seem to think, which might be one reason why students don't read this stuff any more; I always found it very difficult.

  4. I always found it curious that Chomsky 1965 does not even mention "observational adequacy" (unless I missed it repeatedly, but apparently it really wasn't just me). Is it just by accident that observational adequacy didn't make it into Aspects?
    I gather most people (still) think there is an interesting three-way distinction to be had (well, these days, a four-way distinction), although I never fully understood how exactly observational adequacy could be different from descriptive adequacy if it's not merely capturing the (rather uninteresting) acceptable/unacceptable distinction.

    I fully agree that this doesn't get enough discussion these days, so perhaps Norbert could elaborate some more on the difference between observational and descriptive adequacy?

    1. From Current Issues: "Within the framework outlined above, we can sketch various levels of success that might be attained by a grammatical description associated with a particular linguistic theory. The lowest level of success is achieved if the grammar presents the observed primary data correctly. A second and higher level of success is achieved when the grammar gives a correct account of the linguistic intuition of the native speaker, and specifies the observed data (in particular) in terms of significant generalizations that express underlying regularities in the language..."

      I think there's a close relationship between this contrast and the E-language/I-language distinction that Chomsky introduced twenty years later. (See the interesting footnote on "observational adequacy" here: http://bit.ly/1bg0TeS. It's depressing that this still needs to be stressed again and again.) But it's also the normal linguist's activity of trying to extract generalizations from data, and testing these generalizations against new data. The initial description of the project might meet the level of success he called "observational adequacy" by presenting a data set accurately, and the successful outcome of the project might achieve "descriptive adequacy" by noting that new data can be predicted if the language is said to have a devoicing rule, or wh-movement to Spec,CP, or island conditions. Part of the shtick is the fact that moving from observational to descriptive adequacy was part of linguistics long before Chomsky (didn't the structuralists invent the linguistics problem set?), but the next step to what he called "explanatory adequacy" was an utterly novel contribution:

      "...A third and still higher level of sucess is achieved when the associated linguistic theory provides a general basis for selecting a grammar that achieves the second level of success over other grammars consistent with the relevant observed data that do not achieve this. In this case, we can say that the linguistic theory in question suggests an explanation for the linguistic intuition of the native speaker. It can be interpreted as asserting that data of the observed kind will enable a speaker whose intrinsic capacities are as represented in this general theory to construct for himself a grammar that characterizes exactly this linguistic intuition."

    2. Thanks, David and Norbert, for the additional comments.
      I like the idea that identifying "proper" generalizations is what might separate a "merely" observationally adequate from a descriptively adequate grammar (although I'm not sure Norbert actually endorses this?).

      I still find it hard to see, though, how a grammar could "present the observed primary data correctly" even if it doesn't give "a correct account of the linguistic intuition of the native speaker". Isn't the native speaker's intuition our yardstick for what counts as a correct presentation of the primary data?

      What I could imagine is capitalizing on "observed" here and, for example, say that a finite list (or rather, a "grammarized" list) of accurate structural descriptions of some specific set of utterances (say, for lack of a better example, the Penn Treebank, ignoring whether or not it actually provides proper descriptions for its English sentences) could be observationally adequate, in so far as it matches native speakers' intuitions with respect to _these data_. But this grammar would remain utterly silent with respect to anything not in the list, thus failing to provide a proper account of (all of) the linguistic intuition of the native speaker which, of course, applies to a much larger (if not infinite) number of utterances.

      But this seems like a rather artificial case, hardly worth mentioning (which, of course, suggests that I got it wrong).

      With respect to Norbert's idea that being acquirable through UG is the hallmark, I fail to see that in any of the quotes given (which is, of course, not a deep point --- nothing is more boring than arguing over the exact words somebody put down a couple of years ago). It's certainly an interesting interpretation but, perhaps like Alex, it makes me worry that then, for us to distinguish an observationally from a descriptively adequate grammar we would need an explanatorily adequate theory of UG already.

    3. To distinguish an OA G from a DA G we need to consider the structure of UG. Yes. Do we need to have it already? No, no more than we need ALL the data to make evaluations. Rather, the process of arguing for a DA G commits hostages to views of UG, at least implicitly, and this is good. All that I want noted is that BOTH kinds of considerations are important and relevant.

    4. thanks for the additional clarification. I'm still a little bit puzzled, though, as to how an observationally adequate grammar could manage to "correctly" lay out the facts of an individual language if it went against the structure of UG.
      Take Avery's nice example about NP-structure --- arguably, assuming eight distinct NP-rules is not only unsatisfactory once you take into account additional languages, it shouldn't count as _correctly_ representing the data even for that single language?

      (Just to make sure this gets across, I'm not at all opposed to taking what you call "descriptively adequate" as something we ought to aim for in coming up with grammars for particular languages, I just have a hard time imagining what an observationally adequate grammar that failed to be descriptively adequate would look like. So rather than to any substantial disagreement, I think it might really just boil down to the question of what we'd like to call what.)

    5. The problem with rejecting the 8 copies of the NP rule idea out of hand, instinctively, is that there are messier cases that arise, especially in phonology. So, for example, once upon a time in the history of English, the tense-lax alternations that are diachronically the source of alternations such as serene-serenity, divine-divinity, etc. were simple changes in a single feature, obviously correctly describable as a single rule (trisyllabic laxing, iirc). Without a rule-and-representation format whereby one rule applies naturally to all the relevant vowels, we cannot, for example, understand why the same change came to affect all these vowels at more or less the same time.

      However, after the great vowel shift, the phonetic naturalness of the rule disintegrates, and it becomes much less clear whether there is a single rule applying to all the affected vowel qualities, which requires a rather abstract analysis to make it work (as presented in SPE), or whether the internally represented system is messier. Jeri Jaeger in her PhD thesis found that the alternations supported by spelling were applied much more productively than the one that wasn't (profound/profundity), which suggests a certain amount of mess to me, at a minimum not having the same rule for divine and profound.

      So, I think we need some explicit thinking, which is, at bottom, typological/diachronic (and statistical: what is the joint probability of 8 versions of the German NP structure arising more or less simultaneously? Rather low, I would imagine).

      When I talked about this once upon a time with Patrick Suppes (in the late nineties), who had proposed an analysis of English where quantifiers were external to their NPs, just placed next to them by the PS rules in all the places where the NPs occurred (because this made the semantics easier), he justified himself by observing that the amount of generalization captured by the linguistically 'correct' formulations as opposed to the 'incorrect' ones was very small change indeed compared to what he perceived in the more established natural sciences; this is I think true to some extent for English alone, but not when you're generalizing over thousands of languages and also documented diachronic changes. (I don't think I was fast enough on my feet at the time to bring this up then).

    6. That is an interesting argument -- I always took the argument against redundancy in grammars to be a learnability one -- namely big grammars are harder to learn than small grammars (which isn't a good argument). But you are right that there is a good typological one -- in Bayesian terms the prior probability of a grammar with 8 copies of the same rule is low, and in common sense terms if we have a grammar like that we need some sort of explanation of why we have the same rule type reoccurring, and one of the only good ones is that it is just one rule that occurs in 8 places.

      (I realize now that I am just paraphrasing your argument without adding anything new ... )
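
      To put a toy number on the prior point (the rule string and the one-bit-per-symbol cost below are invented stand-ins for a real evaluation metric, so this is just a sketch of the shape of the argument):

```python
# Toy MDL-style prior: log2 P(G) is, up to a constant, minus the grammar's
# length in symbols, so stating a generalization once beats repeating it.
np_rule = "NP -> Det AdjP* N PP*"          # hypothetical rule, 6 symbols

def log2_prior(grammar_rules):
    return -sum(len(rule.split()) for rule in grammar_rules)

one_rule_grammar   = [np_rule]             # the generalization, stated once
eight_copy_grammar = [np_rule] * 8         # one near-identical copy per GNC combination

print(log2_prior(one_rule_grammar))        # -6
print(log2_prior(eight_copy_grammar))      # -48: prior smaller by a factor of 2**42
```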

    7. Avery: I will try to keep this short so as to not derail the thread. There are many differences between syntax and phonology. One of the most profound, IMO, is the following. There is not a shred of evidence for syntactic displacement that is not structure-dependent. But descriptive grammars of real languages are chock full of phonological processes in which a disjunction of natural classes (as currently conceived) either undergo or trigger change (the SPE laxing rule being a good example). Compared to syntax, where poverty-of-the-stimulus arguments of the sort articulated by Chomsky strengthen the case for "structure", there are also very few attempts to make arguments of this type re: phonological features. As Jeff Mielke points out in his book (also OSU dissertation), the innateness of phonological features seems to have been a belief of Jakobson and taken up uncritically by his students.

      Jaeger might be right that there is no rule relating [aw] and [ʌ] (I have no dogs in that race), but her suggestion that phonology is parasitic on orthography is patently absurd when the world is full of hundreds of millions of illiterate adults who speak a language with dozens of interleaved morphophonological processes which they learned without any explicit instruction.

    8. I think the/a Bayesian formulation might be the best way to make generative grammar intelligible and acceptable to a wider range of people than it has tended to be; the way I'd attempt to put it right now (being very new to the ideas, and not understanding much about the math (yet, or possibly ever)) is as follows:

      First, Bayesian language learners need to have an explicit formulation of a hypothesis space (aka grammar notation) and a prior over them (evaluation metric). Also a notion of fit of grammar to data that is more sophisticated than what Chomsky proposed, but we ignore that for a moment.

      Next, we want our total account of language to be the one that assigns the highest probability to the way things are (facts of typology, change, learning data, whatever), relative to the currently known and well-formulated alternative accounts; accounts whose grammar representation can capture the 'obviously significant generalizations', such as those of noun phrase structure, are clearly going to do better than those that can't (e.g. finite state grammars).

      I think this stands up and is valid even if you don't understand much of anything about how learning happens (your theory seems to be too permissive), or if you don't know how to make your favorite theory of grammar probabilistic in the right way, so that it produces a data set with a given probability (P(D|G)).

      Although defining P(D|G) for linguistic theories of the kinds that linguists actually use seems to be a very very hard problem, I think it might be possible to put off actually solving it for a while (not forever) by the following trick (cheap and cheerful if you like it, cheap and nasty if you don't).

      Consider not the probability of the entire data set, but rather the probability of the utterances as expressions of their meanings (P(U1|M1)* ... *P(Un|Mn)), and estimate P(Ui|Mi) as the reciprocal of the number of different U's that your (non-probabilistic) grammar provides to express M (at first, you'd probably want to equate the meaning of an utterance with its 'referential content', or 'cognitive content', as George Lakoff used to call it, treating the discourse variants as optional variants).

      The above product is then the 'fit' term for Bayesian grammar determination (this is very close to an actual grammar debugging technique used in the XLE community, whereby, having parsed a sentence, you run the generator on its f-structure, and tune the grammar to reduce the number of wrong c-structures you get).
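
      In code, the trick might look something like this (the meanings, strings and the grammar_realizations table are invented toy examples, just to show the shape of the estimate):

```python
import math

def log2_fit(grammar_realizations, observed_pairs):
    """Estimate the fit term log2( P(U1|M1) * ... * P(Un|Mn) ), taking
    P(Ui|Mi) to be 1 / (number of distinct utterances the grammar offers
    for meaning Mi).  grammar_realizations maps each meaning to the set
    of strings the (non-probabilistic) grammar generates for it."""
    total = 0.0
    for utterance, meaning in observed_pairs:
        alternatives = grammar_realizations[meaning]
        if utterance not in alternatives:
            return float("-inf")       # the grammar cannot express this pairing at all
        total += -math.log2(len(alternatives))
    return total

# Toy example: two word orders licensed for one meaning, one for the other.
realizations = {
    "SEE(I, DOG)": {"I saw the dog", "the dog, I saw"},
    "BARK(DOG)":   {"the dog barked"},
}
data = [("I saw the dog", "SEE(I, DOG)"), ("the dog barked", "BARK(DOG)")]
print(log2_fit(realizations, data))    # -1.0: one bit spent on the free word-order choice
```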

    9. @Kyle: I wasn't making a general claim that phonology is parasitic on orthography, but rather that some celebrated cases that loomed very large in the minds of people in the 60s and 70s were not as clear cut as often represented at the time.

      And the general question of how much of the appearance of interleaving is actually due to the successive accumulation of sound-changes + conservatism still seems to me to be a live issue that people have not managed to completely defuse. Such as forex the correct analysis of u-umlaut in Modern Icelandic: is the conditioning phonological as analysed by Steve Anderson in his thesis, or morphological (as the Icelanders seemed to think the last time I talked about it with them, which was admittedly a rather long time ago).

      Of course a lot of work in phonology has for a long time been going to areas where this kind of question doesn't arise at all. Clements' tone shift in Kikuyu discussion, for example, cannot be derailed by dragging in history, because the autosegmental analysis he's arguing for is exactly what seems to be needed to make the change he's discussing acceptably probable.

      Disjunctive triggering of processes is yet another issue; if this is happening suspiciously often, there needs to be some story about why, but it doesn't necessarily have to be one that hinges on the grammar representation; there are plenty of physical and functional possibilities that need to be considered too.

  5. In these early discussions it seems that Chomsky is still not incorporating semantics directly into the data to be covered. Is that right?

    So observationally adequate could mean either
    a) defines the right infinite set of sound/meaning pairs
    or
    b) just defines the right (infinite) set of sound sequences -- so the boring +/- acceptability distinction that Benjamin mentioned.


    I find descriptive adequacy problematic as well. Norbert's gloss is undoubtedly correct, but if so, don't we also want the explanatorily adequate theory to *correctly* describe the process by which the grammar is selected from the alternatives?
    That is to say if we want a *correct* theory of the adult competence, don't we also want a *correct* theory of language acquisition?
    So it seems like OA and EA are different in type from DA.

    1. In these early discussions it seems that Chomsky is still not incorporating semantics directly into the data to be covered. Is that right?

      There are a few places in Aspects ch.1 where it does seem that the interpretations of sentences are part of the data to be covered:

      "A fully adequate grammar must assign to each of an infinite range of sentences a structural description indicating how this sentence is understood by the ideal speaker-hearer" (pp.4-5)

      "The syntactic component specifies an infinite set of abstract formal objects, each of which incorporates all information relevant to a single interpretation of a particular sentence." (p.16)

      "The phonological component of a grammar determines the phonetic form of a sentence generated by the syntactic rules. That is, it relates a structure generated by the syntactic component to a phonetically represented signal. The semantic component determines the semantic interpretation of a sentence. That is, it relates a structure generated by the syntactic component to a certain semantic representation." (p.16)

      "What is claimed, then, is that when given scattered examples from (16), the language learner will construct the rule (15) generating the full set with their semantic interpretations" (p.44)

      I agree that the sound-plus-meaning point of view suggested by these excerpts is not very clearly connected to notions like observational adequacy and descriptive adequacy, and therefore not very clearly connected to what Chomsky takes to be the data to be covered. Also, terminology like "generating the full set [of sentences] with their semantic interpretations" is unfortunately perhaps a bit vague; I think it is fair to paraphrase this as "generating the full set of sound-meaning pairs", though some may disagree. Finally, I find that there's also some vagueness about the role of a structural description, i.e. as an individuator of interpretations or as machinery for tracking state that's relevant for subsequent derivational syntactic operations (in practice, they seemingly do both, but the quote seems to suggest that it's all about individuating interpretations).

      But to the extent that the two options on the table are:
      (a) that observational adequacy means getting the right set of sounds/strings (i.e. the image of the set of well-formed syntactic objects, under the interpretation function defined by the phonological component), and
      (b) that observational adequacy means getting the right set of pairings of a sound/string with a meaning (i.e. the image under the pair of interpretation functions),
      then even in Aspects it seems to me that the latter view is probably what was intended. The third quote above, in particular, seems to suggest this more "symmetrical" picture, where the two interpretations are on an equal footing.

    2. My understanding is that Chomsky advocated renouncing for a while the goal of finding the correct grammar in favor of just recognizing the best of some alternatives, because it is easier. This seems sensible, and some Bayesian work still follows this approach, such as Amy Performs on structure-dependence, if I understood it at all. Just a tactical decision.

    3. Yes but these quotes are all about the structural descriptions rather than semantics or interpretations per se. Which is related to the weak/strong adequacy distinction.
      And while one could plausibly say that meanings are observable (with some caveats) one can't say that the structural descriptions are observable, so this has to be part of descriptive adequacy rather than observational adequacy.
      So one way of thinking of the OA/DA distinction is as parallel to the weak/strong distinction. So on this interpretation (which I am not convinced is the right one), OA is just the boring +/- acceptable, and DA is getting the structures right.

      In terms of Norbert's useful terminology from the previous post, levels of adequacy are levels of having "marks of truth" rather than levels of being true.
      So DA seems an outlier if it is actually about being true rather than about having some visible marks of being true.

    4. Oops, Perfors <- Performs, my typing-on-tablet skillz are clearly not very far advanced.

      Re Alex C's remark above, I think the unobservables actually have to be justified by their roles in yielding what I'd like to call 'typological' and 'projective' adequacy. Suppose somebody thinks it's a good technological decision to split their NP rule into 8 identical copies, one for each morphologically distinct GNC combination (somebody actually did this, in a book that I could probably locate if anybody wants the actual reference). This is clearly missing the generalizations about NP structure, but the grammar might be observationally indistinguishable from one that did get the generalizations.

      How do we know it's wrong? Because, typologically, languages never seem to show substantial structural differences between referring expressions with different GNC combinations, so we can infer that, projectively, whatever weird things may be true in the input (objects being on the whole more complex than subjects, for example), the child has a strong bias to acquire one rule for complex referring expressions in all positions (wrinkles including pronominal clitics, certain instances of noun incorporation, and sometimes possessors, especially prenominal ones).

      Chomsky first used three words where a fussy person might have proposed more, but then the audience probably would have just ignored it to an even greater extent than they actually did, and then reduced the vocab to two in the presumably later Aspects, so it's not at all clear to me what the best terminology would actually be. I think on the whole I'd prefer using Descriptive Adequacy for getting the observable facts right, including the ones that have not yet been observed (and including meaning, register, appropriate conditions & pretty much everything that your phone would have to know in order to conduct your social interactions for you), but *not* what the significant generalizations are.

    5. The significant generalizations would be discussed as something that you had to get right in order to attain typological and projective adequacy. The problem with focussing directly on the generalizations is that you can't really tell just by staring hard what they are, because it's not so straightforward how to distinguish a synchronically represented generalization from a diachronic fossil of a formerly represented generalization without dragging typology and learning into the picture. Quine and Suppes both had problems with this issue in papers they wrote in the 70s; Chris Peacocke and Martin Davis considered it from slightly different perspectives as 'level 1.5' in between Marr's levels 1 & 2; I think probably the easiest way to stay out of philosophical sinkholes is to treat it as an aspect of getting the more observable aspects of adequacy right.

    6. Re Alex: DA requires considering two kinds of truth marks: OA and derivability from an adequate FL/UG given PLD(L). So we measure DA on two dimensions. This, at least, is how I read it.

      As for pairs or strings, I think it's pretty clear he intended the former. If not, he should have.

    7. Re Alex C's comment: Yes but these quotes are all about the structural descriptions rather than semantics or interpretations per se. Which is related to the weak/strong adequacy distinction.

      Perhaps this means that I was just missing the real point of your initial question, but I agree that Aspects is entirely vague about what might count as semantics or interpretation. The only point I was trying to make was that, if there is a level of adequacy that can be described as "just getting the right set of such-and-such", then it seems that Chomsky always intended that the such-and-suchs are the kind of things that can be lined up one-to-one with string-meaning pairings, not with mere strings. (The generated objects that are lined up with string-meaning pairings might be pairings of a string with a formula of FOL, or a (DS,SS) pair of phrase-markers from which a string and a whatever-a-meaning-is can respectively be computed, or a single more minimalist-like phrase marker from which both are computed, etc.)

      So to my mind even the earliest writings did not define any role at all --- not even the role of something like observational adequacy, the kind of thing that's a way to get started but is not an end in itself --- to the goal of assigning +/- acceptable to strings. To the extent that Chomsky wants to introduce any level of adequacy that is extensional in this sense, the relevant extension seems to be a set of string-meaning pairs. I think there's potential for unnecessary confusion between (a) the step from dealing with sets of strings to dealing with sets of string-meaning pairs, and (b) the step from a first goal of getting the right set of somethings to a more intensional or more deliberately cognitively-oriented goal of getting the right generative mechanisms. I don't think any of the steps upwards in Chomsky's ladder of levels of adequacy (wherever it is exactly that we place the particular rungs) should be described as (a), because it's sets of string-meaning pairs all the way down.

      (In writing this I was reminded of this discussion about versions of the POS based on only strings and on string-meaning pairs.)

    8. I am away from my books this week, so I can't take part in the exegetical discussion. But there are clearly several different views on DA, which does confirm Avery's point: "So, I think the levels are much more subtle and complex than people used to seem to think, which might be one reason why students don't read this stuff any more; I always found it very difficult."
      Me too.

    9. Another exegetical point: back in the early sixties, people were very unsure about semantics, but seemed to think that direct intuitions about at least some aspects of sentence structure could be used as data, so substituting meaning for structural descriptions is probably a reasonable update to the text. Also adding info about sociolinguistic register and discourse role.

    10. More exegesis: I think semantics was a hopeless mess until

      (a) Jerry Katz in 1971 (or was it 2) in his 'Semantic Theory' formulated an intelligible goal that linguists could pursue using their kinds of methods, namely, investigating what Larson and Segal (1995) called 'Logico-Semantic Properties and Relations' such as, especially entailment (which I would prefer to call 'Meaning-Based Properties and Relations', at least when trying to teach Aussie undergraduates about them).

      (b) Richard Montague, at about the same time, presented some mathematically solid methods for generating MBPRs, and presented them in a convincing way to his students and colleagues at UCLA. Quite a lot of people were thinking along similar lines, I believe, but Montague was firstest with the mostest.

      When Chomsky wrote the texts we have been discussing, semantics was in a hopeless state (in ch 1 of Katz's book, you can read him bewailing it; it seems very strange relative to the current situation), so it's not too surprising that what he said about it then was a bit all over the place. Plus, to make things worse, model theory is not only a technical device for generating MBPRs, but also a possible but somewhat questionable theory of how language is connected to the world, so there's still a certain amount of confusion in the air.

    11. I had an hour or two spare with my copy of Current Issues to hand and it is quite clear that the data he considers consists only of the strings of surface sounds, and not the sound/meaning pairs.
      For example:
      p. 8 On the basis of a limited experience with the data of speech.
      p. 28 footnote 1: the fact that a certain noise was produced ... does not guarantee that it is a well formed utterance.
      (This is the quote that David P linked to earlier)
      p 29. "to give an account of the primary data (e.g. the corpus)"
      and most explicitly in page 34:
      'Suppose that the sentences "John is easy to please" and "John is eager to please" are observed and accepted as well-formed. A grammar that attains only the level of observation adequacy would again merely note this fact in one way or another. To achieve the level of descriptive adequacy a grammar would have to assign structural descriptions... '

      The only reasonable interpretation of these is to say that
      a) observational adequacy is just getting the +/- wff distinction
      b) descriptive adequacy is assigning the right structural descriptions.
      So this is very close to the weak/strong adequacy or E/I-language distinction as David P says above. And no semantics except indirectly via SDs.

      Whether these are the right distinctions is of course the more interesting question.

    12. Very interesting, it looks like I stand corrected: since there's clearly meant to be some role for sets of strings in Current Issues, and in Aspects it's vague at best, maybe the string-meaning pairs perspective did take a while to come along.

      Personally, then, my conclusion from this is that we should pretty much throw away at least the "observational adequacy" level (as Chomsky defined it).

    13. Speaking as a mathematician, observational adequacy has some clarity that descriptive adequacy does not; so if DA implies OA, then OA still has some role, since if one can show that a class of grammars is NOT OA then it implies that that class is not DA.
      (e.g. Swiss German etc.)

      So I think that it still has a useful role even for linguists who follow the I-language good, E-language bad line.

      OA applied to the sound/meaning pairings is a more interesting concept though.

    14. Fair enough, I'd agree with that: OA can be extremely useful in that sense.

      So I'll retreat a bit further. I suppose what I really should have said was: any usefulness we find for OA in the set-of-strings sense (rather than the equally arbitrary set-of-meanings sense) is due to a sort of accident of the kind of data that happens to be easily accessible, rather than any principled reason to think of it as a logical precursor to the set-of-pairs perspective. And I guess, given this, I should probably grant that searching for OA grammars as a first step might be a practical thing to do; or at the very least, it would have been at the time when Chomsky was writing this stuff.
