Comments on Faculty of Language: The future of (my kind of) linguistics

Laszlo (2017-10-26 17:24):
Also, see a very cool recent talk on a similar subject by Alessandro Tavano at MPI on how varying constituent size is tracked by oscillation phase in the same way as Ding showed.
https://www.youtube.com/watch?v=4U0iGykpF-o

Laszlo (2017-10-26 17:15):
@Thomas and Alex: Yeah, sorry, I didn't want to go into a debate about WHAT the best characterization is, only that such conversations are crucial, yet their outcome inevitably runs into the Granularity Mismatch Problem regardless of the answer.

@Norbert: Agreed on the issue of hierarchy and recursion, but unless I misunderstand, if you have Int./Ext. Merge, or merge/move a la Stabler, you get both for free? One tentative hypothesis is that they're encoded by phases, which Nai Ding showed in 2016, and which Elliot Murphy has a Biolinguistics article on.

The really thorny issue is what level of neural computation we wanna work at. Single neuron? Spike train? Coupled population? Dendritic tree? Molecules a la Gallistel? All have computational properties which might encode set-like objects (see Izhikevich's dense but very instructive book, or Christof Koch's kinda dated but still very helpful "Biophysics of Computation", just for single neurons!).
I agree that it's a mess, because there's not much cross-talk about exactly WHICH level we wanna work with, but I think that's the interesting stuff! This mess might be the super hard problem you were talking about, though Eric Hoel from Columbia argues (http://www.mdpi.com/1099-4300/19/5/188) that it isn't really a problem. Tbh I also don't know how to fix it, nor do I know how much of the onus is on GG syntax to save the day here. How do you envision ways neuroscientists could help MP linguists out?

Norbert (2017-10-26 14:57):
Norbert's point was less subtle. He knows of nothing indicating that neuro types know how to represent hierarchy or algebraic structure neurally. Here he is just echoing Dehaene. So the idea that this is what is getting in the way of collaboration seems implausible. I added that there are currently no ideas of how to integrate unbounded hierarchy (recursion) neurally either. So, the debate, though heated, seems to me to take as a premise what nobody right now has the faintest idea of how to execute, at least if Dehaene is to be believed (and I believe him).

Alex Drummond (2017-10-26 13:44):
As long as the 'sets' are created and manipulated via an abstract interface (without access to the underlying tuple), they can satisfy idempotency etc. After all, you wouldn't say that a Python hashtable doesn't really have the properties of a hashtable because there's an underlying array of buckets.

The idea that Chomsky's use of 'set' is a barrier to cross-disciplinary work seems a little specious to me.
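The abstract-interface point is easy to make concrete. Here is a minimal Python sketch (the class name `MergeSet` and its methods are invented for illustration, not anyone's actual proposal): the underlying storage is an ordered tuple, but since clients only see the set-like interface, idempotency and order-insensitivity hold just as they would for a "real" set.

```python
class MergeSet:
    """A small 'set' backed by an ordered tuple, exposed only through
    a set-like interface (illustrative sketch)."""

    def __init__(self, *members):
        # Duplicates collapse on construction, so MergeSet(a, a)
        # behaves like the singleton {a} (idempotency).
        uniq = []
        for m in members:
            if m not in uniq:
                uniq.append(m)
        self._pair = tuple(uniq)

    def __contains__(self, x):
        return x in self._pair

    def __eq__(self, other):
        # Order-insensitive comparison: {a, b} == {b, a}.
        return (isinstance(other, MergeSet)
                and len(self._pair) == len(other._pair)
                and all(m in other for m in self._pair))

    def __repr__(self):
        return "{" + ", ".join(map(repr, self._pair)) + "}"


assert MergeSet("dog", "dog") == MergeSet("dog")   # idempotency
assert MergeSet("a", "b") == MergeSet("b", "a")    # no visible order
assert MergeSet(MergeSet("the", "dog"), "barked") == \
       MergeSet("barked", MergeSet("dog", "the"))  # nesting works too
```

The internal tuple is ordered, just as a hashtable has an internal bucket order, but nothing in the interface reveals it.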
Do you know of any neuroscientists who would like to show that generative syntacticians are wrong? They probably outnumber the ones who'd like to do the opposite! So, great, they can show us the evidence that syntactic structures are encoded using (say) ordered pairs rather than unordered pairs as primitives. I'm sure every syntactician would be surprised and delighted at the result.

Anonymous (2017-10-26 12:29):
Laszlo said that "people characterize primitives like Merge as set-building, while neuro (and CS) results show that set-building is complicated and expensive compared to other ways." Norbert found that surprising, so I added a few examples of why sets can be regarded as complex data structures in CS rather than primitives. We can quibble about the details of what that entails for Minimalism, but if the compromise position is "we don't mean sets when we say sets, and it is okay to implement them as ordered structures that do not satisfy idempotency", then Laszlo has a point that phrasing Merge in terms of set-formation makes bridging the gap between disciplines harder rather than easier.

Alex Drummond (2017-10-26 11:57):
@Thomas. My point re Python was that the implementation details of the Python stdlib have nothing to do with what we're talking about here.
Bringing up these details was potentially confusing to people who aren't programmers, as it appeared to suggest that there's some inherent difficulty with implementing sets of sets, when this is not the case.

"So why say 'set' in the first place?"

It's a bad terminological decision. Bad terminology abounds.

"A faithful implementation of syntactic sets that has all the properties syntacticians want and none of the properties they do not want would be hard."

There's nothing at all hard about it. It's spreading FUD to suggest that Minimalist syntacticians' half-assed use of the term 'set' somehow presents a deep puzzle about how syntactic structures are to be encoded for computational purposes. You can just use an ordered pair and ignore the order. Whether or not that's a "faithful" implementation of sets with <= 2 members is a largely philosophical question.

Anonymous (2017-10-26 11:21):
@Alex: I took Norbert's question to be about sets vs. graphs in general. And there I do find it insightful to look at how sets are actually implemented, because it really shows that the idea of sets as intrinsically simple objects does not hold. When Minimalists say "set", they don't mean sets; at best they mean a very specific subcase of sets. So why say "set" in the first place?

Your implementation of sets as tuples is not limited to sets with just 2 members; it works for any finite set if you don't care about efficient membership tests (if you have a way of lazily instantiating a list, even some infinite sets can be implemented this way). But then we're back to the old question of whether syntactic sets are unordered or the order simply isn't used.
For some reason, the latter position is unpopular. A faithful implementation of syntactic sets that has all the properties syntacticians want and none of the properties they do not want would be hard.

(As for Python sets not being hashable: I briefly mentioned frozensets as the immutable, hashable, and thus nestable counterpart to sets. That said, I've never found a real-world use case for them; I can't even think of a good reason to use sets as keys. *shrug*)

Norbert (2017-10-26 10:39):
@Laszlo:
Thx for the overly extensive bibliography. My info concerning the current state of the neuro art comes from the paper by Dehaene and co. that I discuss here: http://facultyoflanguage.blogspot.com/2016/09/brain-mechanisms-and-minimalism.html. They observe that how to implement algebraic and hierarchical structure neurally is currently unknown. Are they wrong? Do we have a good understanding of how this is done? I should add that even if one can do both of these, do we have a good idea of how wetware implements recursion? Given that linguistic objects are algebraic, hierarchical and recursive, it would seem that how neurons do these sorts of things is currently unclear, at least if Dehaene is to be believed. None of this is to say that people are not working on these issues, but there is a difference between doing so and having actually made progress relevant to what the behavioral studies show must be inside the brain.

Alex Drummond (2017-10-26 10:23):
(Well, actually, Data.HashSet in Haskell is deprecated in favor of Data.Set, which doesn't use a hashtable implementation; but in any case it is immutable and allows sets of sets.)

Alex Drummond (2017-10-26 09:56):
@Thomas. As you know, most flavors of Minimalist syntax assume that the sets in question have at most two members.
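For concreteness, here is what the basic operations look like when such two-member 'sets' are just 2-tuples with the order ignored (a minimal illustrative sketch; the function names are invented, not anyone's official proposal):

```python
def merge(a, b):
    # Merge as bare pair formation; an order is present but never consulted.
    return (a, b)

def member(x, s):
    # Constant-time membership test for a pair.
    return x in s

def set_eq(s, t):
    # Order-insensitive equality: ('a', 'b') counts as equal to ('b', 'a').
    return all(member(x, t) for x in s) and all(member(x, s) for x in t)


assert member("dog", merge("the", "dog"))
assert set_eq(merge("the", "dog"), merge("dog", "the"))
assert not set_eq(merge("the", "dog"), merge("the", "cat"))
```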
It is trivial to construct an efficient implementation of the basic set operations for sets with <= 2 members, so I don't see how it's relevant that sets in general are a relatively complex data structure to implement. It's not as if Minimalist syntacticians need efficient set membership tests for arbitrarily large sets of syntactic objects.

I do agree that a lot of Minimalist syntacticians seem to think that the 'sets vs. ordered pairs' issue is vastly more important than it actually is. But I don't think that the (admittedly idiosyncratic) decision to insist on sets as the primitive raises any real implementation problems. If I wanted to implement Minimalist-style 'sets' in Python, I would just implement them as 2-tuples and ignore the ordering information thereby encoded. Easy.

By the way, sets are not hashable in Python simply because the Python stdlib happens to implement them as mutable objects (lists are also unhashable for the same reason). It has nothing really to do with properties of sets as such. If Python treated sets as immutable (as it does e.g. strings, numbers, tuples), then there would be no difficulty in hashing sets of hashable objects. So e.g. Haskell's (immutable) Data.HashSet is itself hashable.

Alex Drummond (2017-10-26 09:54):
[This comment has been removed by the author.]

Laszlo (2017-10-26 09:33):
If I recall, there were a couple of huge threads here on why (or not) MP is done in sets. I think some references were there? At least from the CS side.
On the second point, there is a huge literature on brain computation at any level you choose. Here is a link to an annotated database of journals and special issues, http://home.earthlink.net/~perlewitz/journals.html, and it's growing every day.

Anonymous (2017-10-26 07:02):
Addendum: Somewhat ironically, when sets aren't implemented as hash tables, they're implemented as trees (e.g. binary search trees).

Anonymous (2017-10-26 06:39):
@Norbert: Laszlo might be thinking of something else, but if you look at the algorithms literature and various programming languages, sets are indeed rare and definitely not a primitive data structure.

When sets are used, they often aren't sets in the mathematical sense. In Python, for example, sets are implemented as hash tables. That's a fairly complicated object. It also means that Python sets are ordered, but the order is arbitrary and cannot be relied on. And they cannot be nested, because sets themselves aren't hashable. So if you want nested sets, you need the frozenset data type, which adds additional complexities.

When it comes to efficiency, sets are much faster for membership tests than lists (because they are hash tables), but iterating over a set takes longer than iterating over a list. The membership advantage only holds for large sets, though; lists with fewer than 5 items are usually faster, because sets come with a certain overhead.
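Both the nesting and hashability points can be checked directly at the Python prompt; a small demonstration:

```python
inner = {1, 2}
try:
    nested = {inner}                      # a set of (mutable) sets
except TypeError as err:
    print("plain sets are unhashable:", err)

# The immutable frozenset nests without complaint...
nested = {frozenset({1, 2}), frozenset({3})}

# ...and its equality is genuinely unordered.
assert frozenset({1, 2}) == frozenset({2, 1})

# Membership is where hash tables pay off: O(1) on average for sets,
# O(n) for lists (only noticeable for larger collections).
big = set(range(10_000))
assert 9_999 in big
```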
Since Bare Phrase Structure sets never have more than two members, you'd be better off using a list (which would also get you around the problem of sets not being nestable).

Whether graphs are hard to implement depends on your specific encoding. A list of tuples is certainly easier to create and manipulate than sets. Highly efficient graph encodings will be more complex.

Trees, when represented as Gorn domains, are just sets/lists of tuples of natural numbers. Btw, encoding trees as Gorn domains also immediately gives you linear order between siblings; a comparable encoding that makes linear order impossible to reference is harder to implement, not easier.

Norbert (2017-10-25 13:50):
MP provides the parts list if it shows that it can recover much/most of the GB facts. This is the test of an MP: to show how it gives you something like GBish universals properly understood. Chomsky does some of this when he shows that Merge properly understood delivers structure dependence, reconstruction etc. I think that one needs to go further (as I've suggested in various recent posts concerning the Merge Hypothesis and the Extended Merge Hypothesis). Once this has been done (and I believe that we can see a light at the end of this tunnel, though there are some serious parts of GB that are proving hard to "reduce"/"unify"), then MP will be over and we will have shown what an FL must look like to derive GB. The FL will have linguistically proprietary parts and domain/cognitively general parts. This division will provide things for cog-neuro to look for.

We might even go further and show how syntactic and phonological and semantic structure are related. This is what Idsardi, Pietroski, Heinz, Graff etc. have been doing.
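[Editorial aside on the Gorn-domain encoding mentioned a few comments up: a tree is just a collection of addresses (tuples of natural numbers) with labels, and linear order between sisters is read directly off the final index. A minimal sketch; the helper names are invented for illustration.]

```python
# The tree [S [NP the dog] [VP barked]] as a Gorn domain: each node's
# address is the sequence of child indices on the path from the root.
tree = {
    (): "S",
    (0,): "NP", (0, 0): "the", (0, 1): "dog",
    (1,): "VP", (1, 0): "barked",
}

def mother(addr):
    # The mother's address is obtained by dropping the last index.
    return addr[:-1]

def sisters(a, b):
    return a != b and mother(a) == mother(b)

def precedes(a, b):
    # Sibling order comes for free: it is just the last index.
    return sisters(a, b) and a[-1] < b[-1]


assert mother((0, 1)) == (0,)
assert precedes((0,), (1,))        # NP precedes VP
assert not precedes((0,), (0, 1))  # dominance, not precedence
```

Hiding that order, as the comment notes, would take extra machinery rather than less.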
If we got these two things, then the classical Chomsky project, IMO, would be over. We will have shown the unifying structure underlying G phenomena and separated G structure from Cog structure more generally in the operations of FL. This would then provide the requisite parts and operations list.

Are we there yet? Not quite. Are we close? I believe we are.

Pedro Tiago Martins (2017-10-25 12:02):
But if Merge is generally thought to be *the* primitive, and constantly advocated to be so, what's there left for MP to do? Shouldn't that mean the goal of providing the parts list has actually been reached?

Norbert (2017-10-25 10:43):
They show this? Really? Could you give me some references? If this were so it would be truly impressive. So sets are a hard nut but, say, graphs are not?

Let's say you are right. Then the job will be to frame the same results in a different, more amenable idiom. I cannot wait for that day to be upon us. But so far as I know, we have very little idea of how brain circuits actually compute. We know where the computations might be going on, but we have no account of many of the operations we believe are being computed. Dehaene discusses this in a paper recently in Neuron (I think). At any rate, I will personally assume full responsibility for solving this conundrum when it arises. May its day come soon.
Laszlo (2017-10-25 10:07):
Totally on board with that. But to me, having the right primitives will not help anyone if they are not amenable to neurobiological work. This matters when, say, people characterize primitives like Merge as set-building, while neuro (and CS) results show that set-building is complicated and expensive compared to other ways. Not that the primitives aren't correct, but that there's a mismatch. Whose job is it to resolve them if not the people proposing the primitives in the first place?

Norbert (2017-10-25 08:00):
AD is right that it is very difficult, even if one has a decent linguistics parts list. My claim was that my kind of linguistics will have done its job if it manages to distill linguistic competence down to a small set of primitives. I believe that MP is close to having something like this. I also believe that the utility of what MP has managed to do is being recognized by some in the cog-neuro of language community. Does this mean that once delivered there will be no more cog-neuro issues? Hell no. There is a good distance between the correct specification of the computational problem (in Marr's terms) and the realization-in-wetware problem. So getting something simple we can print on a T-shirt is a contribution to the overall aims of cog-neuro, though it is by no means the whole ball of wax.
Indeed, it may not even be the hardest part of the problem, which is why we may solve it first.

Laszlo (2017-10-25 07:37):
But isn't that the point of this post? Or have I misunderstood parts like "The aim will be achieved when MP distills syntax down to something simple enough for the cog-neuro types to find in wet ware circuits, something that can be concisely written onto a tee shirt."?

Alex Drummond (2017-10-25 04:55):
Laszlo: It's not helpful (or true) to talk about generative linguists "refusing" to engage with work in neuroscience. Cross-disciplinary work is very difficult and very risky, and sometimes two fields just aren't at a stage where they can productively talk to each other, even with the best of intentions on all sides.

Laszlo (2017-10-24 11:41):
I agree that neuro-superiority is dumb and a natural science disease. Totally fair. But to be honest there is the same thing from GGers, esp. in relation to people who are closer to neuro and comp ling (see Thomas's great earlier post about that). Sometimes well-deserved, but largely unhelpful. I also agree that Stabler and co.'s work is a major step in fixing this, as is the complexity work of Graf, Heinz, Idsardi, etc. (which is largely theory-neutral).
Work by Hale, Brennan, Matchin, Lau, etc. is so important, but this is more than two decades after Chomsky's MP, and almost two after the start of Stabler's MG and the ensuing parsing work. One way I think Poeppel has been helpful is by creating a hypothesis (oscillatory chunking) that allows traditional debates about the role of linguistics in cogsci to be more accessible to neuro in a way that neuro can engage meaningfully. Not that they must be appeased, just that they often don't care (bad neuro!). I'm not hardcore committed to the computational theory of mind, but it's the best thing we have to build a cogsci, and linguists are resistant more than neuro is.

Another facet is that many linguists and esp. neurolinguists are totally unaware of the relevance of these studies and/or actively dismiss them as "just computational stuff" (again, see Graf's post). There is way more neuro work on embodied language, metaphor, basic constituency, and small-level composition than there is on, say, whether BOLD signal tracks an MG parser vs. a bottom-up CFG parser. Neuro types (esp. those with a bit of CS) love the latter, but get the former, because GGers refuse to engage them in terms of what builds on their work, i.e. crafting a cogsci of language. The same happens across any cog discipline. It's just that GGers have the potential to bridge and (largely) don't.

Laszlo (2017-10-24 11:37):
[This comment has been removed by the author.]

Norbert (2017-10-24 09:59):
Yes it is.
But I am looking at the stuff by Dehaene as an indicator that something like Merge can be usefully deployed as a complexity measure. In fact, the old DTC looks pretty good with a Merge-based G at its base. At any rate, I do not expect agreement and am happy that the idea of distilling GG to its primitives appeals to you.

As for GG's resistance to mathematical results, I disagree. There is no resistance at all. Stabler and his crew have pretty good press in the MP community. What GGers resist are results that are not of the right grain given what we think Gs look like. So, CFGs are well studied but are not the right kinds of Gs. Moreover, many math "results" are at right angles to what GGers think are vital. This does not extend to Stabler et al., who try to address issues that GGers find important. But this is not always the case. One more thing GGers rightly resist is neural imperialism. There seems to be a thought out there that neuro types have their hands on the right methods and models. As far as language goes, this strikes me as quite wrong. One of the things I like about Poeppel and Dehaene and younger colleagues like Brennan and Lau is that they understand that linguists have material to bring to the table and that stubbornness on the part of linguists is often because of disdain and ignorance and highhandedness on the part of the neuroscientists. I should add that I am getting a whiff of this from the tone of your remarks in the last paragraph.

Laszlo (2017-10-24 09:34):
I agree about distilling GG to its computational primitives, but I disagree that MP is the right level of granularity, if only because Poeppel/Embick emphasize that granularity mismatch is the real issue, not that we can find "correct" levels of granularity.
This mismatch, and the relevant ways to link neuronal computation with linguistic (or, more broadly, cognitive) computation, is almost certainly off the table currently. Tecumseh Fitch makes this point in his 2014 review. Some real strides in this area have been the striatal loop/prediction-error results of Schultz and co. and the place-cell work by the Mosers, but there is nothing remotely close for language. Even Poeppel's oscillatory work on linguistic chunking is still mostly tied to perceptual systems.

I'd also like to point out the general resistance of GG linguists to mathematical results which have direct application to their work. This directly inhibits the ability of linguistics to interface with neuroscience, which has very vibrant mathematical/computational communities. Even if the math doesn't line up perfectly, it at least fosters conversation in a way that current linguistics absolutely doesn't.