Comments on Faculty of Language: Competence and Performance Redescribed

I think an interesting alternative possibility is that UG might instead be a collection of tools that come out of the box as separate units and can be put together in whatever way (provided the input/output properties of each are met). As a rough analogy, think of a programming language, where you have some well-defined list of primitives and means of combination, but you put them together and get an infinite number of different programs. I don't know if this line of research has really been pursued, but it seems like the kind of thing that could give rise to the large-scale effects that cut across particular phenomena, e.g. locality. Lots of things seem to employ nearness constraints of one sort or another, but maybe not in exactly the same way. Maybe it's because there's some common computational core that gets plugged into different parts of the system. It's that underlying device which has nearness effects, but the details of the effect in different cases depend on what that device is used for.

I think this is pretty distinct from the specification view, but probably has more plausibility in terms of what could actually be coded on the genome.
-- Darryl McAdams (2013-02-01 11:08)

I buy Darryl's amendment. It is a specification of data structures.
-- Norbert (2013-02-01 06:51)

I'm not sure I really get the analogy. Partly this is because the line between data and program is not as clear as people want to say it is. Church encodings make this point fairly clearly: what is a pair in the lambda calculus? Well, you could implement it in some particular way in your computer, say as a memory location with two parts. Then it definitely looks "data"-y. But that's merely one choice. The pure LC doesn't have this option (there's no memory to speak of). So what did Church do? Well, he realized that what makes a pair a pair is something more like an algebraic specification using three operations, which I'll call "pair" (for making pairs), "fst" (for getting the first element), and "snd" (for getting the second):

 fst (pair a b) = a
 snd (pair a b) = b

It doesn't matter how "pair", "fst", and "snd" are implemented so long as these equations hold. So Church came up with the following definitions (using \ for lambda):

 pair = \x. \y. \f. f x y
 fst = \p. p (\x. \y. x)
 snd = \p. p (\x. \y. y)

And we can now check that the equations hold, and they do. But where's the data? As Jerry Sussman said about this in his SICP lectures, we have "data" that's made of nothing but air.

So what really _is_ data, and how can we analogize UG to this? I think perhaps a better analogy really is to the specifications there. UG isn't the data structures themselves, but the specifications that the implementations must satisfy. Some implementations might satisfy the specification quite well, while others might do it poorly.
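(To see the equations hold concretely, the lambda terms above transcribe directly into Python, with Python's lambda standing in for \; the names pair, fst, and snd are from the comment, and Python is just a convenient host for the terms.)

```python
# Church encoding of pairs: "data" built from nothing but functions.
# pair = \x. \y. \f. f x y
# fst  = \p. p (\x. \y. x)
# snd  = \p. p (\x. \y. y)
pair = lambda x: lambda y: lambda f: f(x)(y)
fst = lambda p: p(lambda x: lambda y: x)
snd = lambda p: p(lambda x: lambda y: y)

# The algebraic specification: fst (pair a b) = a, snd (pair a b) = b.
assert fst(pair(1)(2)) == 1
assert snd(pair(1)(2)) == 2
```

Note that no memory cell appears anywhere: the "pair" is just a closure waiting to be handed a projection function.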
But the specification is well defined independent of the implementations.
-- Darryl McAdams (2013-01-30 03:19)

"So, the theory of competence can be viewed as the theory of the linguistic DaSts; what are their primitive features? How are the[y] assembled? What kinds of relations do they encode?"

Could you say a little bit more about why you think that "How are they [= the DaSts?] assembled" is a question that a competence theory needs to answer? (Probably I'm misunderstanding what you take this question to be, but to me any answer seems to require the specification of an algorithm that actually builds/assembles data structures. At the very least, this "how" question strikes me as rather different from the other two "what" questions.)
-- benjamin.boerschinger (2013-01-29 15:13)