Comments on Faculty of Language: The two PoSs again

Norbert (2014-10-26, 12:50):
Don't believe I said that they don't need explanation. Rather, they may need a different kind of explanation. But, yes, as is not unusual, I think we think of these issues differently.

Alex Clark (2014-10-26, 10:05):
I agree with most of that (though I am a little surprised that you don't think observable universals need explanation); it just seems that they are two different arguments that have related conclusions -- namely that there is some, not necessarily domain-specific, structure in the LAD. I just don't see where either the NEDP or the CDP figures as a premise in that argument, whereas I do see their role in what you call PoS2. So I guess that is not just a terminological problem.

Norbert (2014-10-26, 07:56):
The two arguments, I believe, aim at different kinds of mechanisms. PoS1 considers how FL operates in an environment where the quality of the incoming data is perfect, though limited in kind to roughly degree-0+ data. Considering how the LAD operates in this kind of environment can tell us (and has told us) a lot about the structure of FL/UG.

PoS2 relaxes the perfect-environment assumption. This raises issues additional to those addressed by PoS1. Thus, data-massaging problems (how the LAD manages when the deficient data is also dirty) also tell us something about the basic properties of the LAD.

Think balls and inclined planes. Frictionless ones focus our attention on gravitational constants; real ones add in coefficients of friction. Both are useful, but they are not the same.

I believe that breaking the problem up in this way is useful (on analogy with the inclined-plane problem). I think that I might not quite agree with your terse and useful summary, btw. It partly depends on what one means by a property P. My aim is to understand the structure of FL. I do this by investigating how FL would account for linguistic properties I have identified. But the properties are probes into FL, not targets of explanation for their own sake. On this view of things, distinguishing methods for addressing different features of FL is useful. That's what I think PoS1 vs. PoS2 can do.
So, the aim is not to explain language data, but to investigate properties of FL using language data, and for this the distinction has been useful.

Alex Clark (2014-10-26, 03:51):
This may be just a terminological question, but why is PoS1 a poverty-of-the-stimulus argument? If the problem is to explain why all attested human languages have some property P, then I don't see why this relates to the amount of data available to the learner, since even if there is abundant information in the input, we still need to explain why the languages have that property.

Charles Yang (2014-10-26, 00:23):
I sent Norbert a short note on parameter setting, which is discussed in this post and a few others in the past. He asked me to post it here.

--
On your latest post, you mentioned again the necessity of incremental parameter setting a la Dresher/Lightfoot/Fodor-Sakas. The problem is serious indeed if parameter setting is deterministic; the only solution seems to be theirs, by specifying a sequence of parameters and their associated cues.

But if parameter setting is probabilistic, this need seems to go away. Take the standard example of setting V2 and OV/VO in a language like German. These two parameters are not independent, and there isn't a single sentence that can set the two parameters simultaneously: e.g., SVO is ambiguous. One solution is to build in the sequence to ensure that the OV/VO parameter is set before V2.

Consider probabilistic setting like the one in my thesis. The learner probabilistically but independently chooses the values for V2 and OV/VO. For the latter parameter, the existence of "O participle" will gradually nudge it to OV; meanwhile, the V2 parameter will be stumbling up and down rather aimlessly. Over time, the OV/VO parameter will gradually get closer to the target thanks to the cumulative effects of patterns such as "O participle", i.e., the learner is more and more likely to choose OV. And whenever OV is chosen, a string like SVO is no longer ambiguous: only the choice of V2 (or V raising high, glossing over other details) will succeed.

In other words, the logical dependence between the parameters needn't be built in explicitly to guide the learner (as cues): probabilistic trial and error will do the job, even if the "later" parameters in the sequence wander around for a while before shooting toward the target value once the earlier parameters are in place.
--

At some later point, I will report some results that a few friends and I have obtained regarding the smoothness of the parameter space, which is good news for this and indeed all kinds of parameter-setting models.
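A minimal sketch of the kind of probabilistic learner Yang describes, written in Python. The grammar inventory, the compatibility table, the input frequencies, and the learning rate here are all illustrative assumptions, not his actual model or data; the update is a standard linear reward-penalty scheme.

import random

random.seed(1)  # for a reproducible run

# Assumed compatibility table: which (V2, headedness) grammars can parse
# each input pattern. "O_PART" abstracts the "O participle" cue (an object
# preceding a participle), which signals OV regardless of the V2 setting.
COMPATIBLE = {
    "SVO":    {(True, "OV"), (True, "VO"), (False, "VO")},
    "OVS":    {(True, "OV"), (True, "VO")},
    "O_PART": {(True, "OV"), (False, "OV")},
}

# Made-up input frequencies for a German-like (+V2, OV) target.
INPUT = ["SVO"] * 6 + ["OVS"] * 2 + ["O_PART"] * 2

GAMMA = 0.02  # linear reward-penalty learning rate

def update(p, chose_true, success):
    # Nudge p (the probability of choosing the +V2 or OV value) toward the
    # value just chosen if the parse succeeded, away from it if it failed.
    if success:
        return p + GAMMA * (1 - p) if chose_true else p - GAMMA * p
    return p - GAMMA * p if chose_true else p + GAMMA * (1 - p)

p_v2, p_ov = 0.5, 0.5  # start agnostic on both parameters
for _ in range(50_000):
    sentence = random.choice(INPUT)
    v2 = random.random() < p_v2   # sample each parameter value
    ov = random.random() < p_ov   # independently, as in the note
    grammar = (v2, "OV" if ov else "VO")
    success = grammar in COMPATIBLE[sentence]
    p_v2 = update(p_v2, v2, success)
    p_ov = update(p_ov, ov, success)

print(f"p(+V2) = {p_v2:.2f}, p(OV) = {p_ov:.2f}")  # both end near 1.0

Under these assumptions, both probabilities drift to the (+V2, OV) target over repeated presentations with no ordering of the parameters built in: the O_PART pattern rewards OV whatever the current V2 guess, and once the learner is likely to choose OV, an SVO string rewards only the +V2 choice, which is the dependence the note describes.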