Wednesday, November 23, 2016

Some material to point to when the uninformed say that GG is dead, which it isn't

Here are several pieces by our own estimable Jeff Lidz that fight the good fight against the forces of darkness and ignorance.  We need much more of this. We need to get stuff into popular venues defending the work that we have done.[1]

The most important is this piece in Scientific American rebutting the profoundly ignorant and pernicious piece by Ibbotson and Tomasello (I&T) (see here and here and here for longer discussion). Jeff does an excellent job of pointing out the issues and debunking the “arguments” that I&T advance. It is amazing, IMO, that T’s views on these issues still garner any attention. They no doubt arise from the fact that he has done good work on non-linguistic topics. However, his criticisms of GG are both long-standing and of very low quality, and have been so for as long as they have been standing. So, it is good to see that Sci Am has finally opened up its pages to those willing to call junk junk. Read it and pass it around widely in your intellectual community.

Here are two other pieces (here and here). The latter is a response to this. This all appears in PNAS. The articles are co-authored with Chung-hye Han and Julien Musolino. The discussion is an accessible entry into the big issues GG broaches for the scientifically literate non-GGer. As such, it is excellent for publicity purposes.

So, read and disseminate widely. It is important to call out the idiocy out there. It is even fun.



[1] I have a piece in Current Affairs with Nathan Robinson that I will link to when it is available on the web.

44 comments:

  1. "this piece" link is wrong. Thanks a lot for this very nice blog.

    1. The link is down at Sci Am for some reason. I will try to get it later.

  2. Server problem. They said it won't be fixed until Friday (11/25).

  3. I was able to read Lidz's post via Google Cache (try this).

    I've read both I&T's article and Lidz's work. By now I should be accustomed to the level of vituperation in linguistics debates, but I'm not. It's reminiscent of the approach one sees in the humanities, particularly philosophy, where facts are thin on the ground and rhetoric carries the day. So you get Chomsky deriding challenges to his ideas as irrelevant or perhaps not even wrong (in a different context, Christopher Hitchens referred to Chomsky's work as "increasingly robotic"), and you get I&T all but wishing for Chomsky to drop dead.

    In the sciences, by contrast, the focus is on theory and data, and the rhetoric gets edited out. There are exceptions--people whose work is challenged by replication failures, or, in my own field of neuroscience, Larry Squire clinging to the standard model of memory consolidation. These are rare and isolated, though, whereas in linguistics the problem seems endemic. It makes linguistics stand out, and not in a good way; it's one of the reasons that I minimize coverage of this material in my own classes.

    That's not the only reason, though. Ideas like UG, the LAD, Poverty of the Stimulus, etc., seem to have a different character from other ideas in the field. So to return to memory, we can cover different theories of memory and the experiments on which they are based, show how later experiments led to the revision or replacement of those theories, and propose experiments that put these new theories to the test. For the ideas from linguistics, it's a different story: So some aspect of language acquisition is innate? How do you know? Kids learn things that they couldn't possibly have learned from the stimuli they receive? How do you know it's *impossible* (a very strong claim)? What exactly is innate--what is its character, how can you identify it, how is it encoded in the genome or the brain? What sort of experiment would distinguish a general learning mechanism from a domain-specific one? (I can't access the PNAS articles from where I am, but if the argument really is "The kids differ from their parents, so they must be supplying something that they can't get anywhere else," then I'm unimpressed.) Beyond the rocks-and-kittens business, it's thin. (See Ewa Dabrowska's paper here or Scholz and Pullum's work on stimulus poverty arguments.)

    Not that I&T have a coherent alternative; the notion of analogy is pretty vague too. So I am left with vagueness all around, and am generally content to focus on empirical work related to language acquisition and language impairment. It's a shame, but there it is.

    1. I agree. I don't see the point of lengthy debates about innate vs. experience without specific theoretical cards on the table.

      However, Chomsky has always been keen on proposing very specific theories of generative grammar. He just hasn't tackled the questions of online sentence processing or acquisition or neuroscience.

    2. I have a hard time seeing Chomsky's theories as "very specific." I suppose P&P tried to be, as did its predecessors, but they didn't really work out; the Minimalist Program isn't very specific at all.

      Similarly, I have a hard time agreeing that Chomsky "hasn't tackled the question[] of...acquisition." What were the LAD, UG/FL, speculations about stimulus poverty, etc., if not that?

      The same problem crops up in Lidz's post--he claims both that "Chomsky has never offered a theory of language acquisition" and then, later, that Chomsky's ultimate goal was to explain "one critical component of a theory of language acquisition." Maybe there's some disagreement about whether I&T are criticizing a theory or a component of a theory, but that seems like a side issue.

    3. RE: acquisition, I think that the LAD, UG, poverty of stimulus etc. were observations about the challenges learners face and how to develop a theory of grammar. However, he did not have a theory of how language actually develops - e.g., one characterizing how the parameters are set and the intermediate grammars that children acquire before they attain an adult grammar. So Generative Grammar has also identified acquisition as a central problem but has not focused specifically on what happens at each stage of acquisition, focusing rather on the beginning and end states of grammatical knowledge.

      I think the theories have been incredibly specific - Tomasello's are radically unspecific by comparison, as far as I can tell.

    4. I don't think I would use the word "observations" to describe the LAD, etc.; they're more like postulates. Anyway, I don't want to get too far into the terminological weeds. These ideas are part of a theory of language acquisition, or at least touch directly upon such a theory, because if you describe the beginning and end states you're necessarily saying something about what can happen in the middle. It is a shame that this literature and the empirical literature on language acquisition don't get along better.

      Merge, at least, has never seemed particularly specific to me. I don't even think the basic idea is specific to language. Tomasello's idea of analogy or whatever just seems like handwaving, though he does have much more interesting data on acquisition.

    5. I agree about the terminological weeds. What I mean to say is that nobody within the tradition of Generative Grammar appears to have created a theory of language acquisition in a vein similar to Piaget's - characterizing the cognitive stages of development, which explain the range of data that occurs at various ages. I think that Tomasello and others have proposed theories to deal with this kind of data, although I really don't know this literature.

      Have you read the Minimalist Program? Merge is quite specific in terms of its technical details and how it interacts with the lexicon to generate structures. I think Chomsky is quite happy to reduce as much language-specific machinery to domain-general machinery as possible; this is one of the primary goals of the Minimalist Program.

    6. William: I think you need to educate yourself a little more about language acquisition--in both generative and non-generative frameworks--so we are not lost in the terminological weeds. I don't know how you can assert that "Tomasello and others have proposed theories to deal with this kind of data" without knowing this literature. Nor do I know where you find the basis for your claim that "nobody within the tradition of Generative Grammar appears to have created a theory of language acquisition."

    7. I am somewhat familiar with the literature: Tomasello has brought up the difference in linguistic capacities between children of different ages (Tomasello, 2000, TICS), and attributed this to the acquisition of idiosyncratic "constructions" followed by an elaboration of grammar. E.g., English-learning children apparently use "a" and "the" with almost totally non-overlapping sets of nouns until age 3. I totally disagree with his specific conclusions, but he's dealing with real data.

      Please educate me about who "within the tradition of Generative Grammar appears to have created a theory of language acquisition." Let me clarify what I mean by this sentence: I mean a specific theory that posits different stages of grammatical development, that fits within a GG framework, and that correctly explains the data that Tomasello and others are dealing with; that is, differences in linguistic capacities at different ages. I do not think Fodor, Sakas et al. have done this - they seem to have pointed out which parameters are "settable" given distributions of data, but I do not see in their work a theory of what is happening inside a child's head at various stages that explains the data that developmental psychologists are seeing. I am listening and am very anxious to hear the GG theory that does this.

    8. @William: What Charles is too modest to say is that his 2002 book (and much subsequent work, including his excellent 2007 paper with Julie Legate in Language Acquisition) provides an explicit model of the relation between (a) an explicit hypothesis space of possible grammars, (b) the input children receive and (c) the relative age of acquisition of various grammatical features cross-linguistically. Other relevant work is reviewed in Lidz and Gagliardi 2015. See also the long list of researchers listed in this post: http://facultyoflanguage.blogspot.com/2016/09/the-generative-death-march-part-2.html
      who link specific ideas of generative grammar with specific phenomena in language acquisition.

    9. Thank you for the references - I am still getting acquainted with this literature, and there seems to be very good work contained within it. However, it does not address my fundamental question, which is admittedly a hard one.

      Has anyone published a theory of the acquisition of e.g. English grammar within the P&P tradition? I.e., what does the grammar of an English-learning child look like at each stage? I mean a detailed theory. As in, what is the initial state, what is each intermediate state, and what is the final state, with respect to parameter values being incrementally set? Only then can we step back and ask whether this specific theory accurately captures the linguistic data at each age, and whether this theory explains many other pieces of data.

      E.g., Tomasello (2000) reviews data showing that English-learning children between ages 2 and 3 use the determiners "a" and "the" with nearly non-overlapping sets of nouns (Pine & Lieven, 1997). I currently do not see how GB/P&P as stated deals with data like this. There are three possible responses to this (and similar) data point(s):

      (1) dispute the data
      (2) point out how GB/P&P explains these data
      (3) acknowledge that GB/P&P does not explain these data

      Which is it? I can imagine that GB/P&P could be enriched to allow subcategorization frames to be specific to some lexical item, and then general to a category. But then this development has to be specified in grammar, and it currently is not.

      Indeed, this sort of question raises many more questions - how are subcategorization frames learned? Mechanistically, that is - not what data can be used to learn them, but what actually happens in the grammar during development that creates a subcategorization frame? This seems like a question that GB/P&P is ill-equipped to answer. Correct me if I am wrong.

    10. @William: Our theory of grammar certainly does "allow subcategorization frames to be specific to some lexical item" – we call that c-selection. It's necessary in order to explain why, in English, you depend on rather than *depend from (though in other languages, you do the latter). It's necessary to explain why causative v cannot select arrive (cf. *John arrived Mary). And, closer to the domain you were talking about, it's necessary to explain why a can combine with count nouns but not mass nouns. (Note that the distinction is linguistically proprietary: good luck explaining what property of grains and rice in the real world predicts a grain vs. *a rice, or what it is about furniture in the real world that predicts *a furniture).

      To be clear, I'm willing to bet that this has nothing to do with the Tomasello data you're citing, because I'm betting that those same children have no problem comprehending sentences in which the "wrong" (from the perspective of their production) determiner is paired with a noun. If so, then whatever the explanation for this pattern is, their grammar ain't it.
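
      To make the bookkeeping concrete, here is a minimal toy sketch (in Python; the lexicon entries and category labels are invented for illustration, not a worked-out feature system) of c-selection as lexical subcategorization:

      # Toy lexicon: each head lists the complement categories it c-selects.
      # Entries and labels are hypothetical, chosen to mirror the examples above.
      LEXICON = {
          "depend": {"PP[on]"},               # depend on NP, *depend from NP
          "arrive": set(),                    # selects no object: *John arrived Mary
          "a":      {"N[count]"},             # a grain, *a rice, *a furniture
          "the":    {"N[count]", "N[mass]"},
      }

      def selects(head, complement_cat):
          """A head may combine with a complement only if it c-selects its category."""
          return complement_cat in LEXICON.get(head, set())

      print(selects("a", "N[count]"))   # True:  "a grain"
      print(selects("a", "N[mass]"))    # False: "*a rice"
      print(selects("arrive", "NP"))    # False: "*John arrived Mary"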

    11. Assuming the data are relevant, then we need a theory that moves from c-selection to selection of a specific syntactic category. Now we're cooking with gas - this sounds to me a lot like a generative grammar version of what Tomasello and friends are talking about. But the move from c-selection to selection of a broader category isn't encoded in P&P, is it?

    12. @ Omer

      "To be clear, I'm willing to bet that this has nothing to do with the Tomasello data you're citing, because I'm betting that those same children have no problem comprehending sentences in which the "wrong" (from the perspective of their production) determiner is paired with a noun. If so, then whatever the explanation for this pattern is, their grammar ain't it."

      I think this is a complicated inference. I have many good friends that are non-native speakers of English. I understand perfectly fine when they make agreement errors or use the wrong gender. Does that mean that my grammar incorporates those errors? Clearly we need to dissociate "the ability to comprehend" from the grammar that a person has.

    13. @william.

      Again Charles is being too modest. You should have a look at his Zipf article (http://www.ling.upenn.edu/~ycharles/papers/zipfnew.pdf) and his 2013 PNAS article, which both address the Tomasello data head on. The problem is not so much the data as its interpretation, so your 3-point dissection above isn't really nuanced enough. Charles shows that Tomasello's interpretation of the data fails to take account of expected statistical structure in language use.

      You might also have a look at Valian's experimental work on this topic, which shows quite a different signature from Tomasello's. Valian has a paper about this at
      http://maxweber.hunter.cuny.edu/psych/faculty/valian/docs/Determiners_An_empirical_argument_for_innateness_1.pdf

      None of this is to say that these are knock down arguments one way or the other. But your appraisal of the significance of the Tomasello data doesn't, I think, square with the state of current empirical knowledge about the acquisition of determiners. I think the debate is still very much a live one, and the work you pointed to is not now thought to have the import you give it.

    14. @ David

      Thanks for the references - I will look at them.

      I suppose that my larger point is that there seems to be evidence from many corners that there is something like constructions. The question then is how to incorporate this fact into theory, which I take to be the primary concern of the Tomasello types of the world. I completely disagree with the approach that they take, and am trying to formulate the GG approach that does this. Until there is such an approach, I think that Tomasello and friends' approach will be very attractive to many researchers, and while Jeff's article is spot on, it will remain unconvincing to the target audience.

    15. I guess the issue is what you mean by 'is' ;-). There are certainly routinized units, which are used in lieu of computing things freshly every time. But CxG, imo, has it completely backwards. It tries to go from nothing except overt input to these routinized units, and has to assume some unspecified story about why the particular abstractions taken give the hierarchical structures we see. Non-extendability of analogy is particularly troublesome for this view, which is why CxG analyses are of what they are of - generally the boring bits of syntax. The alternative, which I've occasionally called Construction Grammar in Reverse, is just that we have generative grammars, acquired via UG plus the input, and create heuristics for their use, routinizing expressions, or creating computational shortcuts, in certain cases. Even Chomsky has said as much, I think (somewhere in The Minimalist Program book).

    16. Well, I totally agree with everything you wrote in this paragraph. Thanks - I will look for the Chomsky quotes. I suppose it would be nice to have a more formal notion of heuristics, routinized expressions, and shortcuts. This is partly what I was trying to get at in my previous post - that there are fully-fledged syntactic representations that correspond to these different levels of representation.

    17. @ William, surely you didn't mean to ask this:

      "Has anyone published a theory of the acquisition of e.g. English grammar within the P&P tradition? I.e., what does the grammar of an English-learning child look like at each stage. I mean a detailed theory. As in, what is the initial state, what is each intermediate state, and what is the final state."

      Given that we do not have a complete generative grammar of English, this seems like a seriously premature question, and one which is sort of in opposition to the way scientific inquiry typically proceeds. Rather, what acquisitionists do is to (a) identify a grammatical phenomenon that poses an interesting learnability problem, (b) develop a theory of how it *could* be acquired, and (c) test whether it *is* acquired that way. To the degree that that phenomenon is acquired that way, the theory in (b) is validated as a possible explanation. Just as neuroscientists are not asked for theories of how any specific memory gets formed or how any random neuron behaves, linguists do not seek to explain how every specific grammatical feature of a language is acquired. Rather, they identify specific phenomena which they suspect will be enlightening about some aspect of language acquisition (whether it's the initial state, the intake mechanisms, the grammar-updating mechanisms, or the intermediate stages of development) and they probe those phenomena in order to achieve the hoped-for enlightenment. Why would you expect more than that?

    18. The point of my asking this question is that once you start imagining how such a detailed theory would work, there are many data that are not accounted for by that theory. That then raises the question of how to account for them, and we can then ask how problematic this is for the present theory. It wasn't an expectation but a question - and I am curious to imagine what the answer is.

      Analogously, in Syntactic Structures, proposing a detailed model of English syntax exposed the flaws in the general theory. Data not accounted for, generalizations missed, etc. There are probably many data and generalizations that would be missed if one were to propose a specific theory of how English grammar is acquired starting from GB-style P&P. This would then raise questions as to whether we can modify the theory appropriately to account for them, or whether we need a new theory.

      "what acquisitionists do is to (a) identify a grammatical phenomenon that poses an interesting learnability problem (b) develop a theory of how it *could* be acquired and (c) test whether it *is* acquired that way"

      Why is this the limit of acquisition research? This seems to indicate not only that Piaget was wrong, but that his whole research program was illegitimate.

    19. It's not the limit. It's what's done given the general perspective of generative grammar: that the point of doing it (whether that's formal grammar, psycholinguistics or acquisition) is to understand the language faculty. Other people have other perspectives. Many developmentalists think that "development" is a thing to be explained, and from that perspective one asks questions about development. In the domain of language, such questions might be: (a) given experience of a certain sort, why do learners develop along one trajectory vs. another; (b) are all people alike in the order of language development; (c) are all languages alike in the order of language development. Now, these questions would all need to be precisified in order to turn into research questions.

      And in the ideal world, developmental approaches to language acquisition would intersect in some interesting way with more strictly linguistic approaches. Indeed, the search for this interesting intersection is the reason that the journal Language Acquisition is subtitled "A journal of developmental linguistics" and the reason that our new OUP Handbook is called the Handbook of Developmental Linguistics (and not, say, the handbook of generative approaches to acquisition or something like that).

      Research like that started by Nina Hyams on null subjects in child language represents one good example of that intersection. There the observation was not of the "here's an interesting learnability problem" kind but instead of the "here's a puzzle about how language develops" kind. In her case, the observation was that children learning English produce sentences without subjects on their way to producing only sentences with subjects. Her initial proposal was that this was a fact about parameter setting, with children going through a stage where their grammars allowed for null subjects in the same way that the grammars of adult Romance languages do. Other people disagreed and put the burden elsewhere (e.g., on prosodic constraints on production; on sentence planning; on Chinese null topic grammar; etc.). From there, you have interesting scientific debate of the usual kind, with some evidence supporting one view and other evidence supporting another, and lots of back and forth.

      And of course, all of these explanations will require not just a statement of what the right characterization of the "null subject stage" is in acquisition, but also a statement of what fact (or facts) about either the input, the child's intake mechanisms or their grammar-updating mechanisms give rise to this stage. So in the end, the right explanation will take the shape of characterizing the input, the intake, the hypothesis space and the updating mechanism.

    20. We agree that the point of everything is to understand the language faculty. The issue I have is that all sorts of data are ultimately relevant to our theory of the language faculty, and that the competence/performance distinction is only a methodological heuristic. It is great if troublesome psycholinguistic and acquisition data can be explained by other components of cognition (e.g., if agreement attraction can be explained as a memory interference issue). However, it might be the case that the best explanation for psycholinguistic or acquisition data comes in the form of a different grammatical theory than government and binding, and then it becomes a difficult question whether or not to adopt a different grammatical theory, such as Minimalism enriched with something like constructions. So I suppose my comments are directed towards keeping an eye on both of these possibilities.

      From where I sit in cognitive neuroscience, much of the interesting data available to me in neuroimaging and neuropsychology require something like constructions, and government and binding theory specifically is certainly not the right level of description for explaining any of this data. I have hunches that this might also be true for many developmental psychologists.

      So I think we disagree about some of the specific troublesome data and whether it is worthwhile to explore grammatical theories beyond Government and Binding.

  4. I don't think the Han and Lidz articles are going to convince anyone who is familiar with the literature on child language acquisition.

    The logic seems to run as follows:
    (1) Some speakers of Korean have a verb-raising grammar and some do not--it seems highly variable.
    (2) Kids don't learn their particular grammar from their parents.
    (3) Therefore, some innate factor is driving or influencing the choice.

    The trouble arises from the substantial amount of research documenting peer influence and teacher influence on child language learning--see work by Huttenlocher, Wolfram, Mashburn, and others. There are even suggestions that, by 4 years old, peers are the *primary* source of influence. So the logic reduces to "We've ruled out one exogenous factor, perhaps actually a secondary one, so it must depend on endogenous factors." Saying that "these children come from a relatively homogenous speech community" doesn't help, because of course they're not homogeneous on this particular bit of grammar, or else the authors couldn't've done the study.

    To use an analogy, the logic seems equivalent to the following:

    (1) Some speakers of English speak a rhotic dialect and some do not--it seems highly variable.
    (2) Kids don't learn their particular dialect from their parents.
    (3) Therefore, some innate factor is driving or influencing the choice.

    Doesn't wash.

  5. @Steven. It seems to me that your response is failing to take into account the poverty of the stimulus aspect of the argument. Korean speakers almost certainly don't know, either consciously or unconsciously, whether their peers have verb-raising grammars or not, since the only relevant evidence is from rare constructions. In contrast, it is not difficult to determine whether or not someone speaks a rhotic dialect from listening to them speak. We also have plenty of independent evidence that people do perceive a difference between rhotic and non-rhotic dialects and that this difference is sociologically significant. Korean speakers, on the other hand, are not aware that these two varieties of Korean exist, so it seems unlikely that anyone would come under peer pressure to speak one or the other.

    1. Oops--let me clarify. When I was writing about "peer influence," I didn't mean to suggest "peer pressure." That is, I wasn't thinking that kids were teasing each other for verb-raising differently. I simply meant that peers were a source of linguistic information, along with (and perhaps more so than) parents. That holds true even when the learner isn't explicitly aware of the information that is being acquired.

      Your comment brings up another point, though. I don't recall if Han and colleagues checked a corpus to verify that the relevant constructions were rare; as Pullum showed in a different case, you can't just assume that that's true.

    2. If you’re ruling out any kind of social pressure to pick one grammar over the other, then it seems you’re suggesting that they may be choosing a grammar at random owing to conflicting evidence rather than owing to the absence of relevant evidence. If that’s true, I’m not sure if it makes any difference to the essential point. Being inconsistent is one way the Stimulus can be Poor.

    3. I'd also add that if children are hearing lots of these examples with both scope readings, then a more sensible conclusion would be that verb raising is optional.

    4. Yes indeed. So is this evidence against models (like Charles Y's) which maintain a population of grammars, and where we would expect both options to have high probability? I guess it depends whether there are differences in the input data that we don't know about (e.g. from peer groups or just random fluctuations).

    5. Charles Y has two learning models ;) More seriously, I believe Jeff and co. have done computational simulations to model their results, which are compatible with some models but not others (i.e., incompatible with Bayesian ones), but the results did not go into the paper.

    6. @Alex Drummond: Well, there's a lot of daylight between "peer pressure" and "random choice," and I think most of language learning takes place in that sunny expanse.

      Certainly stimulus poverty can involve inconsistent input, but I'm having a hard time imagining how some innate factor could lead learners of Korean to pick one of two possible grammars seemingly unpredictably. How does an innate factor do *that*?

      Or if the argument is that the innate factor reduces the nine possible grammars to the two that we see in the actual language, that still doesn't address how and why learners pick one of those. Moreover, the argument seems to be as follows:

      1) There are nine possible grammars.
      2) The learners only ever end up with one of two of those.
      3) Turns out that only two ever appear exogenously (if they appear at all).
      4) Why do the learners end up with one of the two grammars that are exogenously available? We ruled out one of many exogenous sources, so there must be some endogenous contribution.

    7. @Steven. As I understand it, the proposal is that kids pick one of the two UG-licensed grammars essentially at random. In other words, in the absence of any evidence in either direction, kids guess. So the answer to the question of how and why kids pick the grammars they pick is pretty straightforward.

      As I said before, I don't think 1-4 is a good summary of the argument because it misses out the POS considerations. But ok, suppose that Korean kids hear lots of relevant examples (presumably from both groups of speakers, given that many kids have parents with different scope judgments), and suppose that they're really good at figuring out speakers' intended scope readings (which of course is no trivial matter). Why, then, do they all end up with grammars where verb raising is either obligatory or illicit? I don't understand how peer influence could plausibly give rise to this distribution. What exactly is the alternative hypothesis here?

    8. @Alex Drummond I do understand that POS is part of the argument; that's why I wrote "if they appear at all" in (3). My problem is that the authors don't seem to have supported that part very well. I don't see that they've examined a corpus, if one is in fact available, and they only examined one exogenous source out of many.

      I'm not sure I understand the basis of your question. If you know why the authors looked at parents' grammar, then you know why it'd be useful for them to look at peers too, since it's known that children's speech influences each other's. Perhaps you would find that children hear both versions, but settle on whichever one they hear more often. Given that both are common, it's likely that the particular mix children hear will vary greatly, with some hearing primarily one and some hearing primarily the other. (I don't know that they need to hear "lots" of examples.) They never hear any evidence for any of the other seven, so there's no reason to wonder why they don't pick them up.

    9. They do need to hear lots of examples, because it's only in cases where the context clearly requires one scope interpretation over the other that such examples are potentially informative. And that is assuming that kids regularly entertain scope interpretations other than those generated by their current grammar, which is a big assumption.

      As far as your suggestion is concerned, unless it's fleshed out a bit more, it seems to predict that kids should never learn grammars with any kind of optionality, as they'll just assume that the option they hear less often is ungrammatical. Even just sticking to scope, that seems to make the wrong prediction for e.g. English, which has lots of scopal ambiguity, even though surface and non-surface scope readings aren't equally common.

    10. @Steven P. Yes, we did not report on a corpus analysis. There isn't much available. We looked in a small corpus of newspaper text and found exactly 0 relevant examples, but this did not seem sufficient to warrant inclusion in the paper. More to the point, though, these kinds of sentences are exceedingly rare in those languages for which large corpora are available. But the problem is that the data the learner would need includes not just the occurrence of the sentences, but the occurrence of the sentences in contexts that make one scope reading much more likely than the other. Worse, it's not clear for any ambiguous sentence what the expected value of occurrence for each interpretation should be, so it's not even clear what kind of corpus analysis would be sufficiently convincing here. As discussed in this post:

      http://facultyoflanguage.blogspot.com/2014/11/theres-no-poverty-of-stimulus-pish.html

      children also have biases when it comes to scope that make the input distinct from the child's perception of it. So, while we haven't shown definitively that the relevant variation doesn't exist in children's input, it's a relatively reasonable assumption in the argument.

    11. @ Alex C and Charles Y: The finding does not provide evidence against models that maintain multiple grammars. Lisa Pearl did some simulations (I think those made it into her dissertation, but I'm not sure) showing that these data were consistent with Yang's Variational model. More precisely, if you give Yang's model perfectly ambiguous evidence between two parameter settings, then about 1/2 of the learners choose one grammar and 1/2 choose the other. None of the learners stabilized in a state where they assigned some probability to each of the 2 grammars. It should be obvious why this is (if you flip an evenly weighted coin a bunch of times, you're bound to get some streaks of all heads, which in Yang's model leads to one side of the coin getting extra weight; from there, the rich get richer). The result is incompatible with a naive Bayes approach, because if the data is perfectly ambiguous, the learner will maintain both grammars with equal probability indefinitely. There was also a student of Joe Pater's who modeled this phenomenon in OT.
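
      For anyone who wants to see the dynamics, here is a minimal simulation sketch (in Python; the function name, parameter values, and population size are my own illustrative choices, not Pearl's actual model code) of a two-grammar linear reward-penalty learner of the sort Yang proposes, fed perfectly ambiguous input:

      import random

      def variational_learner(n_inputs=20000, gamma=0.02):
          p = 0.5  # probability of sampling grammar A; starts unbiased
          for _ in range(n_inputs):
              # Every datum is perfectly ambiguous: whichever grammar the
              # learner happens to sample parses it, so that grammar is
              # rewarded (linear reward; the unsampled grammar is untouched).
              if random.random() < p:
                  p += gamma * (1 - p)  # sampled A; A parses, so reward A
              else:
                  p -= gamma * p        # sampled B; B parses, so reward B
          return p

      # A population of learners, all given the same ambiguous evidence.
      finals = [variational_learner() for _ in range(1000)]
      frac_a = sum(f > 0.5 for f in finals) / len(finals)
      print(f"ended up with grammar A: {frac_a:.2f}")  # roughly 0.50
      print(f"mean distance from 0.5: {sum(abs(f - 0.5) for f in finals) / len(finals):.2f}")
      # Each individual learner drifts to p near 0 or 1: streaks of one
      # choice are self-reinforcing (the rich get richer), so the
      # population splits roughly in half and no learner stabilizes at a
      # mixture. A naive Bayesian updater, by contrast, would sit at 0.5
      # forever, since perfectly ambiguous data never favor either grammar.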

    12. @Jeff Lidz I appreciate the reply. I think my concern really involves this:

      "So, while we haven't shown definitively that the relevant variation doesn't exist in children's input, it's a relatively reasonable assumption in the argument."

      You're suggesting an endogenous component to language, but it seems like you're more or less assuming yourself halfway there. That is, there are two main components of an argument for endogeneity: (1) the kids learn some particular feature of grammar, and (2) they didn't pick it up from the input. The latter point is critical, since we already know that (1) is true.

      I can understand why your approach might be convincing to people who have already bought into that assumption, but for those of us who don't find it as plausible (or for those of us who don't know), it's not as convincing. So I think I'm saying that it depends on your priors.

      Which is not to say that I'm blaming you--it's an interesting feature of Korean, and if there's no corpus, then there's no corpus.

    13. Yeah, I think that's fair. What's interesting to me about this is simply the fact that the variation appears to be unpredictable, yet highly abstract, which makes it a new kind of data. Typical poverty of the stimulus arguments are of the form "there are N hypotheses consistent with experience, but people all acquire the same one". This one is new in that you have the same kind of disconnect between the input and the acquired grammar, and there are two possible outcomes, apparently not predictable from experience. When you're in a room with people who ostensibly speak the same language and they disagree so vehemently about the meaning of a sentence like this, it's pretty striking. It's not a "you say potato, I say potahto" situation.

      In any case, I'd be delighted to see someone engage this kind of work by showing how this kind of pattern could emerge without building a set of possible grammars in to begin with. As I have repeatedly tried to emphasize, PoS arguments are invitations for solutions to what appear to be puzzles. I just wish more people would recognize that these puzzles are real, whatever solutions they come up with.

    14. One of the premier examples of something probably not acquired from the input is some of the behaviors of 'oblique subjects' in Icelandic, some of which are very rare indeed, to the point of no actual observations in corpora so far. But there are things that can be observed without heroic efforts, for example that they can appear in the position between a fronted auxiliary and the next verb. I don't know the actual frequency in texts, but I found one 70 pages into a short story collection. An actual equi-deleted oblique subject took five novels to find; one with something agreeing with it has not yet turned up (but informants seem to basically accept them, and, more importantly, agree on what the case ought to be).

      Another kind of example that can now be observed thanks to Brian MacWhinney's efforts on the CHILDES project is various properties of possessives: recursive possessors such as 'God's mommy's name' occur at 6 per million words in the 13.5 million words of the English corpus (not including pronominal possessors, which are arguably different), but I think the real figure is closer to 3/mw because that's the UK figure, and the US one is much higher due mostly to an explosion of them in the Brown Sarah corpus, while doubly recursive possessors (John's sister's dog's collar) are nonexistent. Child language people won't tell me how many examples per million words is enough for something to be learned as opposed to projected, but I suggest that less than one example per 10 million words is sufficient.

      But techniques for observing the stimulus are improving, and perhaps we will get a better idea of what to look for; it's not obvious, for example, that # examples per # of words is actually the right thing to look at.

  6. Just noticed that in the "Labels", Julien Musolino appears as "Julien Mussolini" :)

  7. I thought these remarks by Pinker might be of interest:

    here
