Comments on Faculty of Language: Levels of Adequacy in Generative Grammar

Anonymous (2014-01-14, 02:52):

Having spent several (i.e. three) months staring at the specific paragraphs in Aspects, my retina scars and I can here clarify that the three Levels of Adequacy were quite specifically mentioned as Enumerative, Descriptive, and Explanatory.

If departure from the canon is permitted in a retrospective such as this: in my view at the time, it became possible, from a Philosophy of Science / Kuhnian framework, to read Chomsky's characterization, at surface structure, as indexing the state of the linguistic art across a diachronic transition from Logical Positivism ("tag and bag" enumeration), through Empirical Descriptivism ("emergent properties" analyses, replete with informant-specific categories and similarly burdensome "description" mechanisms), to what today might be called algorithmic/programmatic "explanation" (essentially Post grammar [1936], where recursion, rewrite rules, the notion of a Linguistically Significant Generalization, etc. were drivers) -- in essence starting the generative tradition.

All of this to my eyes seemed at surface a
specific message for his fellow practitioners: word lists and paired sentences certainly have their place in language description, but one should in any case aim for explanation.

For those still reading this: it was the deeper structure, rather than the levels inventory outlined above -- specifically the discussion (at the bottom of an ensuing even-numbered page, the number of which has fortunately faded from recollection!) of Strong Generative Adequacy, which to this day I consider a most definitive characterization of computational parsimony -- that had me captivated for so many months.

Tim Hunter (2013-09-06, 07:42):

Fair enough, I'd agree with that: OA can be extremely useful in that sense.

So I'll retreat a bit further. I suppose what I really should have said was: any usefulness we find for OA in the set-of-strings sense (rather than the equally arbitrary set-of-meanings sense) is due to a sort of accident of the kind of data that happens to be easily accessible, rather than any principled reason to think of it as a logical precursor to the set-of-pairs perspective.
And I guess, given this, I should probably grant that searching for OA grammars as a first step might be a practical thing to do; or at the very least, it would have been at the time when Chomsky was writing this stuff.

Alex Clark (2013-09-06, 04:25):

Speaking as a mathematician, observational adequacy has some clarity that descriptive adequacy does not; so if DA implies OA, then OA still has some role, since if one can show that a class of grammars is NOT OA, it follows that that class is not DA (e.g. Swiss German, etc.).

So I think that it still has a useful role, even for linguists who follow the "I-language good, E-language bad" line.

OA applied to sound/meaning pairings is a more interesting concept, though.

Tim Hunter (2013-09-05, 08:54):

Very interesting; it looks like I stand corrected: since there's clearly meant to be some role for sets of strings in <i>Current Issues</i>, and in <i>Aspects</i> it's vague at best, maybe the string-meaning pairs perspective did take a while to come along.

Personally, then, my conclusion from this is that we should pretty much throw away <b>at least</b> the "observational adequacy" level (as Chomsky defined it).

Alex Clark (2013-08-11, 12:23):

I had an hour or two spare with my copy of Current Issues to hand, and
it is quite clear that the data he considers consists only of the strings of surface sounds, and not the sound/meaning pairs. For example:

p. 8: "On the basis of a limited experience with the data of speech."
p. 28, footnote 1: "the fact that a certain noise was produced ... does not guarantee that it is a well formed utterance." (This is the quote that David P linked to earlier.)
p. 29: "to give an account of the primary data (e.g. the corpus)"
And, most explicitly, on page 34: "Suppose that the sentences 'John is easy to please' and 'John is eager to please' are observed and accepted as well-formed. A grammar that attains only the level of observational adequacy would again merely note this fact in one way or another. To achieve the level of descriptive adequacy a grammar would have to assign structural descriptions..."

The only reasonable interpretation of these is to say that:
a) observational adequacy is just getting the +/- wff distinction;
b) descriptive adequacy is assigning the right structural descriptions.

So this is very close to the weak/strong adequacy or E/I-language distinction, as David P says above. And no semantics, except indirectly via SDs.

Whether these are the right distinctions is of course the more interesting question.
Avery Andrews (2013-07-17, 18:32):

@Kyle: I wasn't making a general claim that phonology is parasitic on orthography, but rather that some celebrated cases that loomed very large in the minds of people in the 60s and 70s were not as clear-cut as often represented at the time.

And the general question of how much of the appearance of interleaving is actually due to the successive accumulation of sound changes plus conservatism still seems to me to be a live issue that people have not managed to completely defuse -- for example, the correct analysis of u-umlaut in Modern Icelandic: is the conditioning phonological, as analysed by Steve Anderson in his thesis, or morphological (as the Icelanders seemed to think the last time I talked about it with them, which was admittedly a rather long time ago)?

Of course, a lot of work in phonology has for a long time been going into areas where this kind of question doesn't arise at all.
Clements' discussion of tone shift in Kikuyu, for example, cannot be derailed by dragging in history, because the autosegmental analysis he's arguing for is exactly what seems to be needed to make the change he's discussing acceptably probable.

Disjunctive triggering of processes is yet another issue; if this is happening suspiciously often, there needs to be some story about why, but it doesn't necessarily have to be one that hinges on the grammatical representation; there are plenty of physical and functional possibilities that need to be considered too.

Avery Andrews (2013-07-17, 17:31):

More exegesis: I think semantics was a hopeless mess until

(a) Jerry Katz, in 1971 (or was it '72?), in his 'Semantic Theory', formulated an intelligible goal that linguists could pursue using their kinds of methods, namely, investigating what Larson and Segal (1995) called 'Logico-Semantic Properties and Relations' -- especially entailment (which I would prefer to call 'Meaning-Based Properties and Relations', at least when trying to teach Aussie undergraduates about them).

(b) Richard Montague, at about the same time, presented some mathematically solid methods for generating MBPRs, and presented them in a convincing way to his students and colleagues at UCLA. Quite a lot of people were thinking along similar lines, I believe, but Montague was firstest with the mostest.

When Chomsky wrote the texts we have been discussing, semantics was in a hopeless state (in ch. 1 of Katz's book you can read him bewailing it, which seems very strange relative to the current situation), so it's not too surprising that what he said about it then was a bit all over the place.
Plus, to make things worse, model theory is not only a technical device for generating MBPRs, but also a possible but somewhat questionable theory of how language is connected to the world, so there's still a certain amount of confusion in the air.

Avery Andrews (2013-07-17, 17:17):

I think the/a Bayesian formulation might be the best way to make generative grammar intelligible and acceptable to a wider range of people than it has tended to be; the way I'd attempt to put it right now (being very new to the ideas, and not understanding much about the math (yet, or possibly ever)) is as follows:

First, Bayesian language learners need to have an explicit formulation of a hypothesis space (aka grammar notation) and a prior over it (evaluation metric). Also a notion of fit of grammar to data that is more sophisticated than what Chomsky proposed, but we can ignore that for the moment.

Next, we want our total account of language to be the one that assigns the highest probability to the way things are (facts of typology, change, learning data, whatever), relative to the currently known and well-formulated alternative accounts; and accounts where the grammar representation can capture the 'obviously significant generalizations', such as those of noun phrase structure, are clearly going to do better than those that don't (e.g.
finite state grammars).

I think this stands up and is valid even if you don't understand much of anything about how learning happens (your theory seems to be too permissive), or if you don't know how to make your favorite theory of grammar probabilistic in the right way, so that it produces a data set with a given probability (P(D|G)).

Although defining P(D|G) for linguistic theories of the kinds that linguists actually use seems to be a very, very hard problem, I think it might be possible to put off actually solving it for a while (not forever) by the following trick (cheap and cheerful if you like it, cheap and nasty if you don't).

Consider not the probability of the entire data set, but rather the probability of the utterances as expressions of their meanings (P(U1|M1) * ... * P(Un|Mn)), and estimate P(Ui|Mi) as the reciprocal of the number of different U's that your (non-probabilistic) grammar provides to express Mi (at first, you'd probably want to equate the meaning of an utterance with its 'referential content', or 'cognitive content', as George Lakoff used to call it, treating the discourse variants as optional variants).

The above product is then the 'fit' term for Bayesian grammar determination. (This is very close to an actual grammar-debugging technique used in the XLE community, whereby, having parsed a sentence, you run the generator on its f-structure and tune the grammar to reduce the number of wrong c-structures you get.)

Kyle Gorman (2013-07-17, 15:54):

Avery: I will try to keep this short so as to not derail the thread. There are many differences between syntax and phonology. One of the most profound, IMO, is the following.
There is not a shred of evidence for syntactic displacement that is not structure-dependent. But descriptive grammars of real languages are chock full of phonological processes in which a disjunction of natural classes (as currently conceived) either undergoes or triggers a change (the SPE laxing rule being a good example). And compared to syntax, where poverty-of-the-stimulus arguments of the sort articulated by Chomsky strengthen the case for "structure", there are very few attempts to make arguments of this type re: phonological features. As Jeff Mielke points out in his book (also his OSU dissertation), the innateness of phonological features seems to have been a belief of Jakobson's, taken up uncritically by his students.

Jaeger might be right that there is no rule relating [aw] and [ʌ] (I have no dog in that race), but her suggestion that phonology is parasitic on orthography is patently absurd when the world is full of hundreds of millions of illiterate adults who speak languages with dozens of interleaved morphophonological processes, which they learned without any explicit instruction.

Alex Clark (2013-07-17, 00:48):

That is an interesting argument -- I always took the argument against redundancy in grammars to be a learnability one -- namely, that big grammars are harder to learn than small grammars (which isn't a good argument).
But you are right that there is a good typological one: in Bayesian terms, the prior probability of a grammar with 8 copies of the same rule is low; and in common-sense terms, if we have a grammar like that, we need some sort of explanation of why the same rule type keeps recurring, and one of the only good explanations is that it is just one rule that occurs in 8 places.

(I realize now that I am just paraphrasing your argument without adding anything new...)

Avery Andrews (2013-07-16, 23:54):

The problem with rejecting the 8-copies-of-the-NP-rule idea out of hand, instinctively, is that messier cases arise, especially in phonology. So, for example, once upon a time in the history of English, the tense-lax alternations that are diachronically the source of alternations such as serene-serenity, divine-divinity, etc. were simple changes in a single feature, obviously correctly describable as a single rule (trisyllabic laxing, iirc). Without a rule-and-representation format whereby one rule applies naturally to all the relevant vowels, we cannot, for example, understand why the same change came to affect all these vowels at more or less the same time.

However, after the Great Vowel Shift, the phonetic naturalness of the rule disintegrates, and it becomes much less clear whether there is a single rule applying to all the affected vowel qualities, which requires a rather abstract analysis to make it work (as presented in SPE), or whether the internally represented system is messier.
Jeri Jaeger, in her PhD thesis, found that the alternations supported by spelling were applied much more productively than the one that wasn't so supported (profound/profundity), which suggests a certain amount of mess to me -- at a minimum, not having the same rule for divine and profound.

So, I think we need some explicit thinking, which is, at bottom, typological/diachronic (and statistical: what is the joint probability of 8 versions of the German NP structure arising more or less simultaneously? Rather low, I would imagine).

When I talked about this once upon a time (in the late nineties) with Patrick Suppes, who had proposed an analysis of English where quantifiers were external to their NPs, just placed next to them by the PS rules in all the places where the NPs occurred (because this made the semantics easier), he justified himself by observing that the amount of generalization captured by the linguistically 'correct' formulations, as opposed to the 'incorrect' ones, was very small change indeed compared to what he perceived in the more established natural sciences; this is, I think, true to some extent for English alone, but not when you're generalizing over thousands of languages and also documented diachronic changes. (I don't think I was fast enough on my feet to bring this up at the time.)

benjamin.boerschinger (2013-07-16, 23:03):

Thanks for the additional clarification.
I'm still a little bit puzzled, though, as to how an observationally adequate grammar could manage to "correctly" lay out the facts of an individual language if it went against the structure of UG. Take Avery's nice example about NP structure: arguably, assuming eight distinct NP rules is not only unsatisfactory once you take additional languages into account; it shouldn't count as _correctly_ representing the data even for that single language?

(Just to make sure this gets across: I'm not at all opposed to taking what you call "descriptively adequate" as something we ought to aim for in coming up with grammars for particular languages. I just have a hard time imagining what an observationally adequate grammar that failed to be descriptively adequate would look like. So, rather than any substantial disagreement, I think it might really just boil down to the question of what we'd like to call what.)

Norbert (2013-07-16, 17:05):

To distinguish an OA G from a DA G we need to consider the structure of UG. Yes. Do we need to have it already? No, any more than we need ALL the data to make evaluations. Rather, the process of arguing for a DA G gives hostages to views of UG, at least implicitly, and this is good.
All that I want noted is that BOTH kinds of considerations are important and relevant.

Avery Andrews (2013-07-16, 15:33):

Another exegetical point: back in the early sixties, people were very unsure about semantics, but seemed to think that direct intuitions about at least some aspects of sentence structure could be used as data, so substituting meaning for structural descriptions is probably a reasonable update to the text. Also adding info about sociolinguistic register and discourse role.

Alex Clark (2013-07-16, 00:44):

I am away from my books this week, so I can't take part in the exegetical discussion. But there are clearly several different views on DA, which does confirm Avery's point: "So, I think the levels are much more subtle and complex than people used to seem to think, which might be one reason why students don't read this stuff any more, I always found it very difficult."
Me too.

Tim Hunter (2013-07-15, 21:58):

Re Alex C's comment: <i>Yes but these quotes are all about the structural descriptions rather than semantics or interpretations per se.
Which is related to the weak/strong adequacy distinction.</i>

Perhaps this means that I was just missing the real point of your initial question, but I agree that Aspects is entirely vague about what might count as semantics or interpretation. The only point I was trying to make was that, if there is a level of adequacy that can be described as "just getting the right set of such-and-suches", then it seems that Chomsky always intended the such-and-suches to be the kind of things that can be lined up one-to-one with string-meaning pairings, not with mere strings. (The generated objects that are lined up with string-meaning pairings might be pairings of a string with a formula of FOL, or a (DS, SS) pair of phrase-markers from which a string and a whatever-a-meaning-is can respectively be computed, or a single more minimalist-like phrase marker from which both are computed, etc.)

So, to my mind, even the earliest writings did not assign any role <b>at all</b> --- not even the role of something like observational adequacy, the kind of thing that's a way to get started but is not an end in itself --- to the goal of assigning +/- acceptable to strings. To the extent that Chomsky wants to introduce any level of adequacy that is extensional in this sense, the relevant extension seems to be a set of string-meaning pairs. I think there's potential for unnecessary confusion between (a) the step from dealing with sets of strings to dealing with sets of string-meaning pairs, and (b) the step from a first goal of getting the right set of somethings to a more intensional or more deliberately cognitively oriented goal of getting the right generative mechanisms.
I don't think any of the steps upwards in Chomsky's ladder of levels of adequacy (wherever it is exactly that we place the particular rungs) should be described as (a), because it's sets of string-meaning pairs all the way down.

(In writing this I was reminded of <a href="http://facultyoflanguage.blogspot.com/2013/02/theres-no-there-there.html?showComment=1360165232056#c2291056176683397658" rel="nofollow">this discussion</a> about versions of the POS based on only strings and on string-meaning pairs.)

Norbert (2013-07-15, 18:27):

Re Alex: DA requires considering two kinds of truth marks: OA, and derivability from PLD(L) given an adequate FL/UG. So we measure DA on two dimensions. This, at least, is how I read it.

As for pairs or strings, I think it's pretty clear he intended the former. If not, he should have.

Avery Andrews (2013-07-15, 17:21):

The significant generalizations would be discussed as something that you had to get right in order to attain typological and projective adequacy. The problem with focussing directly on the generalizations is that you can't really tell just by staring hard what they are, because it's not so straightforward to distinguish a synchronically represented generalization from a diachronic fossil of a formerly represented generalization without dragging typology and learning into the picture.
Quine and Suppes both had problems with this issue in papers they wrote in the 70s; Chris Peacocke and Martin Davies considered it from slightly different perspectives as 'level 1.5', in between Marr's levels 1 and 2. I think probably the easiest way to stay out of philosophical sinkholes is to treat it as an aspect of getting the more observable aspects of adequacy right.

Avery Andrews (2013-07-15, 17:14):

Oops, Perfors <- Performs; my typing-on-tablet skillz are clearly not very far advanced.

Re Alex C's remark above, I think the unobservables actually have to be justified by their roles in yielding what I'd like to call 'typological' and 'projective' adequacy. Suppose somebody thinks it's a good technological decision to split their NP rule into 8 identical copies, one for each morphologically distinct GNC combination (somebody actually did this, in a book that I could probably locate if anybody wants the actual reference). This is clearly missing the generalizations about NP structure, but the grammar might be observationally indistinguishable from one that did get the generalizations.

How do we know it's wrong?
Because, typologically, languages never seem to show substantial structural differences between referring expressions with different GNC combinations; so we can infer that, projectively, whatever weird things may be true of the input (objects being on the whole more complex than subjects, for example), the child has a strong bias to acquire one rule for complex referring expressions in all positions (wrinkles including pronominal clitics, certain instances of noun incorporation, and sometimes possessors, especially prenominal ones).

Chomsky first used three words where a fussy person might have proposed more -- but then the audience would probably have just ignored it to an even greater extent than they actually did -- and then reduced the vocabulary to two in the presumably later Aspects, so it's not at all clear to me what the best terminology would actually be. I think on the whole I'd prefer using Descriptive Adequacy for getting the observable facts right, including the ones that have not yet been observed (and including meaning, register, appropriateness conditions, and pretty much everything that your phone would have to know in order to conduct your social interactions for you), but *not* for what the significant generalizations are.

benjamin.boerschinger (2013-07-15, 17:02):

Thanks, David and Norbert, for the additional comments. I like the idea that identifying "proper" generalizations is what might separate a "merely" observationally adequate grammar from a descriptively adequate one (although I'm not sure Norbert actually endorses this?).

I still find it hard to see, though, how a grammar could "present the observed primary data correctly" even if it doesn't give "a correct account of the linguistic intuition of the native speaker".
Isn't the native speaker's intuition our yardstick for what counts as a correct presentation of the primary data?

What I could imagine is capitalizing on "observed" here and, for example, saying that a finite list (or rather, a "grammarized" list) of accurate structural descriptions of some specific set of utterances (say, for lack of a better example, the Penn Treebank, ignoring whether or not it actually provides proper descriptions for its English sentences) could be observationally adequate, insofar as it matches native speakers' intuitions with respect to _these data_. But this grammar would remain utterly silent with respect to anything not in the list, thus failing to provide a proper account of (all of) the linguistic intuition of the native speaker, which, of course, applies to a much larger (if not infinite) number of utterances.

But this seems like a rather artificial case, hardly worth mentioning (which, of course, suggests that I got it wrong).

With respect to Norbert's idea that being acquirable through UG is the hallmark, I fail to see that in any of the quotes given (which is, of course, not a deep point --- nothing is more boring than arguing over the exact words somebody put down a couple of years ago). It's certainly an interesting interpretation, but, perhaps like Alex, I worry that then, in order to distinguish an observationally from a descriptively adequate grammar, we would need an explanatorily adequate theory of UG already.

Alex Clark (2013-07-15, 13:37):

Yes, but these quotes are all about the structural descriptions rather than semantics or interpretations per se.
Which is related to the weak/strong adequacy distinction. And while one could plausibly say that meanings are observable (with some caveats), one can't say that the structural descriptions are observable, so these have to be part of descriptive adequacy rather than observational adequacy. So one way of thinking of the OA/DA distinction is as parallel to the weak/strong distinction. On this interpretation (which I am not convinced is the right one), OA is just the boring +/- acceptable, and DA is getting the structures right.

In terms of Norbert's useful terminology from the previous post, levels of adequacy are levels of having "marks of truth" rather than levels of being true. So DA seems an outlier if it is actually about being true rather than about having some visible marks of being true.

Avery Andrews (2013-07-15, 11:26):

My understanding is that Chomsky advocated renouncing for a while the goal of finding the correct grammar in favor of just recognizing the best of some alternatives, because it is easier. This seems sensible, and some Bayesian work still follows this approach, such as Amy Performs on structure-dependence, if I understood it at all. Just a tactical decision.

Tim Hunter (2013-07-15, 10:24):

<i>In these early discussions it seems that Chomsky is still not incorporating semantics directly into the data to be covered.
Is that right?</i>

There are a few places in Aspects ch. 1 where it does seem that the interpretations of sentences are part of the data to be covered:

"A fully adequate grammar must assign to each of an infinite range of sentences a structural description indicating how this sentence is understood by the ideal speaker-hearer" (pp. 4-5)

"The syntactic component specifies an infinite set of abstract formal objects, each of which incorporates all information relevant to a single interpretation of a particular sentence." (p. 16)

"The phonological component of a grammar determines the phonetic form of a sentence generated by the syntactic rules. That is, it relates a structure generated by the syntactic component to a phonetically represented signal. The semantic component determines the semantic interpretation of a sentence. That is, it relates a structure generated by the syntactic component to a certain semantic representation." (p. 16)

"What is claimed, then, is that when given scattered examples from (16), the language learner will construct the rule (15) generating the full set with their semantic interpretations" (p. 44)

I agree that the sound-plus-meaning point of view suggested by these excerpts is not very clearly connected to notions like observational adequacy and descriptive adequacy, and therefore not very clearly connected to what Chomsky takes to be the data to be covered. Also, terminology like "generating the full set [of sentences] with their semantic interpretations" is unfortunately perhaps a bit vague; I <b>think</b> it is fair to paraphrase this as "generating the full set of sound-meaning pairs", though some may disagree. Finally, I find that there's also some vagueness about the role of a structural description, i.e.
as an individuator of interpretations or as machinery for tracking state that's relevant for subsequent derivational syntactic operations (in practice they seemingly do both, but the quote seems to suggest that it's all about individuating interpretations).

But to the extent that the two options on the table are:
(a) that observational adequacy means getting the right set of sounds/strings (i.e. the image of the set of well-formed syntactic objects under the interpretation function defined by the phonological component), and
(b) that observational adequacy means getting the right set of pairings of a sound/string with a meaning (i.e. the image under the pair of interpretation functions),
then even in Aspects it seems to me that the latter view is probably what was intended. The third quote above, in particular, seems to suggest this more "symmetrical" picture, where the two interpretations are on an equal footing.
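One passing technical point in the thread, Avery Andrews's "cheap and cheerful" Bayesian fit term -- estimating each P(Ui|Mi) as the reciprocal of the number of distinct utterances the grammar provides for the meaning Mi -- is concrete enough to sketch in a few lines. The toy grammar-as-dictionary below (names `fit_term`, `toy_grammar`, and the example meanings are all invented for illustration; a real grammar would compute the alternative sets by generation) is a rough sketch of that estimate, not anything proposed in the thread itself:

```python
from fractions import Fraction

def fit_term(grammar, data):
    """Compute P(U1|M1) * ... * P(Un|Mn), estimating each P(Ui|Mi) as
    1 / |{U : grammar offers U as an expression of Mi}|, per the
    reciprocal-count suggestion in the comment above."""
    fit = Fraction(1)
    for utterance, meaning in data:
        alternatives = grammar.get(meaning, set())
        if utterance not in alternatives:
            # The grammar fails to generate the observed form at all:
            # zero fit, on this crude estimate.
            return Fraction(0)
        fit *= Fraction(1, len(alternatives))
    return fit

# Hypothetical toy "grammar": each meaning mapped to the set of
# strings that express it.
toy_grammar = {
    "GREET": {"hello", "hi"},
    "FAREWELL": {"goodbye"},
}

observed = [("hello", "GREET"), ("goodbye", "FAREWELL")]
print(fit_term(toy_grammar, observed))  # prints 1/2
```

A grammar that licenses fewer free variants per meaning scores higher on this term, which is the sense in which it rewards "fit": exact fractions (rather than floats) keep the product stable even for long utterance lists.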