Monday, October 23, 2017

The future of (my kind of) linguistics

I have been pessimistic of late concerning the fate of linguistics. It’s not that I think it is in intellectual trouble (I actually cannot think of a more exciting period of linguistic research), but I do think that the kind of linguistics I signed up for as a youth is currently lightly prized, if at all. I have made no secret of this view. I even have a diagnosis. I believe that the Minimalist Program (MP) has forced to the surface a tension that has been inchoate in the field since its inception 60 or so years ago. Sociologically, within the profession, this tension is being resolved in ways that disfavor my conception of the enterprise. You have no doubt guessed where the tension resides: the languist-linguist divide. Languists and linguists are interested in different problems and objects of study. Languists mainly care about the subtle ways that languages differ. Linguists mainly care about the invariances and what these tell us about the overarching capacities that underlie linguistic facility. Languists are typologists. Linguists are cognitivists.

Before the MP era, it was pretty easy to ignore the different impulses that guide typological vs cognitive work (see here for more discussion). But MP has made this harder, and the field has split. And not evenly.  The typologists have largely won, at least if one gauges this by the kind of work produced and valued. The profession loves languages with all of their intricacies and nuances. The faculty of language, not so much. As I’ve said many times before, and will repeat again here, theoretical work aimed at understanding FL is not highly valued (in fact, it is barely tolerated) and the pressures to cover the data far outweigh demands to explain it. This is what lies behind my pessimistic view about the future of (my kind of) linguistics. Until recently. So what happened? 

I attended a conference at UMD sponsored by BBI (the Brain and Behavior Initiative) (here). The workshop brought together people studying vocalization in animals with linguists and cog-neuro types interested in language. The goal was to see if there was anything these two groups could say to one another. The upshot is that there were potential points of contact, mainly revolving around sound in natural language, but that as far as syntax was concerned, there is little reason to think that animal models would be that helpful, at least at this point in time. Given this, why did I leave hopeful? Mainly because of a great talk by David Poeppel that allowed me to glimpse what I take to be the future of my brand of linguistics. I want to describe to you what I saw.

Cog-neuro is really really hard. Much harder than what I do. And it is not only hard because it demands mastery of distinct techniques and platforms (i.e. expensive toys) but also because (and this is what David’s talk demonstrated) to do it well presupposes a very solid acquaintance with results on some branch of cognition. So to study sound in humans requires knowing a lot about acoustics, brain science, computation, and phonology. This, recall, is a precondition for fruitful inquiry, not the endpoint. So you need to have a solid foundation in some branch of cognition and then you need to add to this a whole bunch of other computational, statistical, technical and experimental skills. One of the great things about being a syntactician is that you can do excellent work and still be largely technically uneducated and experimentally inept. I suspect that this is because FL is such a robust cognitive system that shoddy methods suffice to get you to its core general properties, which is the (relatively) abstract level that linguists have investigated. Descending into wetware nitty gritty demands loosening the idealizations that the more abstract kind of inquiry relies on and this makes things conceptually (as well as practically) more grubby and difficult. So, it is very hard to do cog-neuro well. And if this is so, then the aim of cognitive work (like that done in linguistics) is to lighten cog-neuro’s investigative load. One way of doing this is to reduce the number of core operations/computations that one must impute to the brain. Let me explain.

What we want out of a cog-neuro of language is a solution to what Embick and Poeppel call the mapping problem: how brains execute different kinds of computations (see here). The key concept here is “the circuit,” some combination of brain structures that embody different computational operations. So part of the mapping problem is to behaviorally identify the kinds of operations that the brain uses to chunk information in various cognitive domains and to figure out which brain circuits execute them and how (see here for a discussion of the logic of this, riffing on a paper by Dehaene and friends). And this is where my kind of linguistics plays a critical role. If successful, Minimalism will deliver a biologically plausible description of all the kinds of operations that go into making an FL. In fact, if successful it will deliver a very small number of operations, very few of which are language specific (one? Please make it one!), that suffice to compute the kinds of structures we find in human Gs. In this context, the aim of MP is to factor out the operations that constitute FL and to segregate the cognitively and computationally generic ones from the more bespoke linguistic ones. The resulting descriptive inventory provides a target for the cog-neuro types to shoot at.

Let me say this another way. MP provides the kind of parts list Embick and Poeppel have asked for (here) and identifies the kinds of computational structures that Dehaene and company focus on (here). In other words, MP descriptions are at the right grain for cog-neuro redemption: they provide primitives of the right “size,” in contrast to earlier (e.g. GBish) accounts, and primitives that in concert can yield Gs with GBish properties (i.e. ones that have the characteristics of human Gs).

So that’s the future of my brand of linguistics, to be folded into the basic wisdom of the cog-neuro of language. And what makes me hopeful is that I think that this is an attainable goal. In fact, I think that we are close to delivering a broadly adequate outline of the kinds of operations that go into making a human FL (or something with the broad properties of our FL) and separating out the linguistically special from the cognitively/computationally generic. Once MP delivers this, it will mark the end of the line of investigation that Chomsky initiated in the mid 1950s into human linguistic competence (i.e. into the structure of human knowledge of language). There will, of course, be other things to do and other important questions to address (e.g. how do FLs produce Gs in real time? How do Gs operate in real time? How do Gs and FLs interact with other cognitive systems?) but the fundamental “competence” problems that Chomsky identified over 60 years ago will have pretty good first order answers.

I suspect that many reading this will find my views delusional, and I sympathize. However, here are some reasons why I think this.

First, I believe that the last 20 years of work has largely vindicated the GB description of FL. I mean this in two ways: (i) the kinds of dependencies, operations, conditions and primitives that GB has identified have proven to be robust in that we find them again and again across human Gs. (ii) these dependencies, operations, conditions and primitives have also proven to be more or less exhaustive in that we have not found many additional novel dependencies, operations, conditions and primitives despite scouring the world’s Gs (i.e. over the last 25 years we have identified relatively few new potential universals). What (i) and (ii) assert is that GB identified more or less all the relevant G dependencies and (roughly) accurately described them. If this is correct (and I can hear the howls as I type), then MP investigations that take these to be legitimate explananda (in the sense of providing solid probes into the fundamental structure of FL) are on solid ground, and explaining these features of FL will suffice to explain why human FLs have the features they do. In other words, deriving GB in a more principled way will be a solid step in explaining why FL is built as it is and not otherwise.

Second, perhaps idiosyncratically, I think that the project of unifying the modules and reducing them to a more principled core of operations and principles has been quite successful (see the three-part discussion ending here). As I’ve argued before, the principal criticisms I have encountered wrt MP rest on a misapprehension of what its aims are. If you think of MP as a competitor to GB (or LFG or GPSG or Construction Grammar or…) then you’ve misunderstood the point of the program. It does not compete with GB. It cannot, for it presupposes it. The aim is to explain GB (or its many cousins) by deriving its properties in a more principled and perspicuous way. This would be folly if the basic accuracy of GB were not presupposed. Furthermore, MP so understood has made real progress IMO, as I’ve argued elsewhere. So GB is a reasonable explanandum given MP aims, and Minimalist theories have gone some way in providing non-trivial explanantia.

Third, the MP conception has already animated interesting work in the cog-neuro of language. Dehaene, Friederici, Poeppel, Moro and others have clearly found the MP way of putting matters tractable and fecund. This means that they have found the basic concepts engageable, and this is what a successful MP should do. Furthermore, this is no small thing. This suggests that MP “results” are of the right grain (or “granularity” in Poeppel parlance). MP has found the right level of abstraction to be useful for cog-neuro investigation and the proof of this is that people in this world are paying attention in ways that they did not do before. The right parts list will provoke investigation of the right neural correlates, or at least spur such an investigation.

Say I am right. What comes next? Well, I think that there is still some theoretical work to do in unifying the modules and then investigating how syntactic structures relate to semantic and phonological ones (people like Paul Pietroski, Bill Idsardi, Jeff Heinz, and Thomas Graf are doing very interesting work along these lines). But I think that this further work relies on taking MP to have provided a pretty good account of the fundamental features of human syntax.

This leaves as the next big cognitive project figuring out how Gs and FL interact with other cognitive functions (though be warned, interaction effects are very tough to investigate!). And here I think that typological work will prove valuable. How so?

We know that Gs differ, and appear to differ a lot. The obvious question revolves around variation: how does FL build Gs that have these apparently different features (are they really different or only apparently so? And how are the real differences acquired and used?). Studying the factors behind language use will require having detailed models of Gs that differ (I am assuming the standard view that performance accounts presuppose adequate competence models). This is what typological work delivers: solid detailed descriptions of different Gs and how they differ. And this is what theories of G use require as investigative fodder.

Moreover, the kinds of questions will look and feel somewhat familiar: is there anything linguistically specific about how language is used or does language use exploit all the same mechanisms as any other kind of use once one abstracts from the distinctive properties of the cognitive objects manipulated? So for example, do we parse utterances differently than we do scenes? Are there linguistic parsers fitted with their own special properties or is parsing something we do pretty much in the same way in every domain once we abstract away from the details of what is being parsed?[1] Does learning a G require different linguistically bespoke learning procedures/mechanisms? [2] There is nothing that requires performance systems to be domain general. So are they? Because this kind of inquiry will require detailed knowledge of particular Gs it will allow for the useful blurring of the languistics/linguistics divide and allow for a re-emergence of some peaceful co-existence between those mainly interested in the detailed study of languages and their differences and those interested in the cognitive import of Gs.

Let me end this ramble: I see a day (not that far off) when the basic questions that launched GG will have been (more or less) answered. The aim will be achieved when MP distills syntax down to something simple enough for the cog-neuro types to find in wet ware circuits, something that can be concisely written onto a tee shirt. This work will not engage much with the kinds of standard typological work favored by working linguists. It addresses different kinds of questions.

Does this mean that typological work is cognitively idle? No, it means that the kinds of questions it is perfect for addressing are not yet being robustly asked, or at least not in the right way. There are some acquisitionists (e.g. Yang, Lidz) that worry about the mechanisms that LADs use to acquire different Gs, but there is clearly much more to be done. There are some that worry about how different Gs differentially affect parsing or production. But, IMO, a lot of this work is at the very early stages and it has not yet exploited the rich G descriptions that typologists have to offer. There are many reasons for this, not the least of which is that it is very hard to do and that typologists do not construct their investigations with the aim of providing Gs that fit these kinds of investigations. But this is a topic for another post for another time. For now, kick back and consider the possibility that we might really be close to having answered one of the core questions in GG: what does linguistic knowledge consist in?



[1] Jeff Lidz once put this as the following question: is there a linguistic parser or does the brain just parse? On the latter view, parsing is an activity that the brain does using knowledge it has about the objects being parsed. On the former view, linguistic parsing is a specific activity supported by brain structure special to linguistic parsing. There is actually not much evidence that I am aware of that parsing is dedicated. In this sense there may be parsing without parsers, unless by parser you mean the whole mind/brain.
[2] Lisa Pearl’s thesis took this question on by asking whether the LAD is built to ignore data from embedded clauses or if it just “happens” to ignore it because it is not statistically robust. The first view treats language acquisition as cognitively special (as it comes equipped with blinders of a special sort), the second as like everything else (rarer things are causally less efficacious than more common things). Lisa’s thesis asked the question but could not provide a definitive answer, though it did provide a recipe for one.

26 comments:

  1. I find plenty to agree with in this post. I just want to emphasize that typological work is already very important for the work you mention on the similarities between phonology and syntax, at least on the phonology side.

    If you check out this year's NELS program, you'll see an abstract on conditional blocking in Tutrugbu by McCollum, Bakovic, Mai, and Meinhardt. If the data and the generalization based on it are correct, then this language directly affects how we think about the complexity of phonology, and that in turn may strengthen or weaken the computational parallelism between phonology and syntax. In the other direction, computational claims make predictions about the possible range of variation, and in order to check those predictions you need good typological data. Theoretical insights can be helpful, too, in assessing the viability of a certain computational model/class, and for syntax that's what I rely on most for now. But in phonology (and morphology) I often find the raw data --- and the papers describing this data --- easier to work with.

  2. I agree about distilling GG to its computational primitives, but I disagree that MP is the right level of granularity, if only because Poeppel/Embick emphasize that granularity mismatch is the real issue, not that we can find "correct" levels of granularity. This mismatch, and the relevant ways to link neuronal computation with linguistic (or, more broadly, cognitive) computation, are almost certainly off the table currently. Tecumseh Fitch makes this point in his 2014 review. Some real strides in this area have been the striatal loop/prediction error results of Schultz and co, and the place-cell work by the Mosers, but there is nothing remotely close for language. Even Poeppel's oscillatory work on linguistic chunking is still mostly tied to perceptual systems.

    I'd also like to point out the general resistance of GG linguists to mathematical results which have direct application to their work. This directly inhibits the ability of linguistics to interface with neuroscience, which has very vibrant mathematical/computational communities. Even if the math doesn't line up perfectly, it at least fosters conversation in a way that current linguistics absolutely doesn't.

    1. Yes it is. But I am looking at the stuff by Dehaene as an indicator that something like Merge can be usefully deployed as a complexity measure. In fact, the old DTC looks pretty good with a Merge-based G at its base. At any rate, I do not expect agreement and am happy that the idea of distilling GG to its primitives appeals to you.

      As for GG's resistance to mathematical results, I disagree. There is no resistance at all. Stabler and his crew have pretty good press in the MP community. What GGers resist are results that are not of the right grain given what we think Gs look like. So, CFGs are well studied but are not the right kinds of Gs. Moreover, many math "results" are at right angles to what GGers think are vital. This does not extend to Stabler et al., who try to address issues that GGers find important. But this is not always the case. One more thing GGers rightly resist is neural imperialism. There seems to be a thought out there that neuro types have their hands on the right methods and models. As far as language goes, this strikes me as quite wrong. One of the things I like about Poeppel and Dehaene and younger colleagues like Brennan and Lau is that they understand that linguists have material to bring to the table and that stubbornness on the part of linguists is often because of disdain and ignorance and high-handedness on the part of the neuroscientists. I should add that I am getting a whiff of this from the tone of your remarks in the last paragraph.

    2. I agree that neuro-superiority is dumb and a natural science disease. Totally fair. But to be honest there is the same thing from GGers, esp in relation to people who are closer to neuro and comp ling (see Thomas's great earlier post about that). Sometimes well-deserved, but largely unhelpful. I also agree that Stabler and co's work is a major step in fixing this, as is the complexity work of Graf, Heinz, Idsardi, etc. (which is largely theory-neutral). Work by Hale, Brennan, Matchin, Lau, etc. etc. is so important, but this is more than 2 decades after Chomsky's MP, and almost 2 after the start of Stabler's MGs and the ensuing parsing work. One way I think Poeppel has been helpful is by creating a hypothesis (oscillatory chunking) that allows traditional debates about the role of linguistics in cogsci to be more accessible to neuro in a way that neuro can engage meaningfully. Not that they must be appeased, just that they often don't care (bad neuro!). I'm not hardcore committed to computational theory of mind, but it's the best thing we have to build a cogsci and linguists are resistant more than neuro is.

      Another facet is that many linguists and esp neurolinguists are totally unaware of the relevance of these studies and/or actively dismiss them as "just computational stuff" (again, see Graf's post). There is way more neuro work on embodied language, metaphor, basic constituency, and small-level composition than there is on, say, whether the BOLD signal tracks an MG parser vs a bottom-up CFG parser. Neuro types (esp those with a bit of CS) love the latter, but get the former, because GGers refuse to engage them in terms of what builds on their work, i.e. crafting a cogsci of language. The same happens across any cog discipline. It's just that GGers have the potential to bridge and (largely) don't.

    3. Laszlo: It’s not helpful (or true) to talk about generative linguists “refusing” to engage with work in neuroscience. Cross-disciplinary work is very difficult and very risky, and sometimes two fields just aren’t at a stage where they can productively talk to each other, even with the best of intentions on all sides.

    4. But isn't that the point of this post? Or have I misunderstood parts like "The aim will be achieved when MP distills syntax down to something simple enough for the cog-neuro types to find in wet ware circuits, something that can be concisely written onto a tee shirt."?

    5. AD is right that it is very difficult, even if one has a decent linguistic parts list. My claim was that my kind of linguistics will have done its job if it manages to distill linguistic competence down to a small set of primitives. I believe that MP is close to having something like this. I also believe that the utility of what MP has managed to do is being recognized by some in the cog-neuro of language community. Does this mean that once delivered there will be no more cog-neuro issues? Hell no. There is a good distance between the correct specification of the computational problem (in Marr's terms) and the problem of its realization in wetware. So getting something simple we can print on a T shirt is a contribution to the overall aims of cog-neuro, though it is by no means the whole ball of wax. Indeed, it may not even be the hardest part of the problem, which is why we may solve it first.

    6. Totally on board with that. But to me, having the right primitives will not help anyone if they are not amenable to neurobiological work. This matters when, say, people characterize primitives like Merge as set-building, while neuro (and CS) results show that set-building is complicated and expensive compared to other ways. Not that the primitives aren't correct, but that there's a mismatch. Whose job is it to resolve them if not the people proposing the primitives in the first place?

    7. They show this? Really? Could you give me some references? If this were so it would be truly impressive. So sets are hard but, say, graphs are not?

      Let’s say you are right. Then the job will be to frame the same results in a different, more amenable idiom. I cannot wait for that day to be upon us. But so far as I know, we have very little idea of how brain circuits actually compute. We know where the computations might be going on, but we have no account of many of the operations we believe are being computed. Dehaene discusses this in a recent paper in Neuron (I think). At any rate, I will personally assume full responsibility for solving this conundrum when it arises. May its day come soon.

    8. @Norbert: Laszlo might be thinking of something else, but if you look at the algorithms literature and various programming languages, sets are indeed rare and definitely not a primitive data structure.

      When sets are used, they often aren't sets in the mathematical sense. In Python, for example, sets are implemented as hash tables. That's a fairly complicated object. It also means that Python sets have an iteration order, but the order is arbitrary and cannot be relied on. And they cannot be nested, because sets themselves aren't hashable. So if you want nested sets, you need the frozenset data type, which adds additional complexities.

      When it comes to efficiency, sets are much faster for membership tests than lists (because they are hash tables), but iterating over a set takes longer than iterating over a list. But the membership advantage only holds for large sets; lists with fewer than 5 items are usually faster because sets come with a certain overhead. Since Bare Phrase Structure sets never have more than two members, you'd be better off using a list (which would also get you around the problem with sets not being nestable).

      Whether graphs are hard to implement depends on your specific encoding. A list of tuples is certainly easier to create and manipulate than sets. Highly efficient graph encodings will be more complex.

      Trees, when represented as Gorn domains, are just sets/lists of tuples of natural numbers. Btw, encoding trees as Gorn domains also immediately gives you linear order between siblings; a comparable encoding that makes linear order impossible to reference is harder to implement, not easier.
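
      To make the contrast concrete, here is a minimal Python sketch (toy labels and invented helper names, nothing canonical): a bare-phrase-structure object as nested frozensets, and the same tree as a Gorn domain, i.e. a mapping from addresses (tuples of natural numbers) to labels, where sibling order falls out of the addresses for free.

      # A bare-phrase-structure object as nested frozensets; ordinary Python
      # sets cannot be nested, frozensets can.
      so = frozenset({"V", frozenset({"D", "N"})})

      # The same tree as a Gorn domain: addresses (tuples of child indices)
      # mapped to labels.
      gorn_tree = {
          ():     "VP",
          (0,):   "V",
          (1,):   "DP",
          (1, 0): "D",
          (1, 1): "N",
      }

      def daughters(tree, address):
          """Addresses of the daughters of the node at `address`, in sibling order."""
          kids = [a for a in tree
                  if len(a) == len(address) + 1 and a[:len(address)] == address]
          return sorted(kids)  # linear order among siblings is read off the last index

      print(daughters(gorn_tree, ()))    # [(0,), (1,)]
      print(daughters(gorn_tree, (1,)))  # [(1, 0), (1, 1)]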

    9. Addendum: Somewhat ironically, when sets aren't implemented as hash tables, they're implemented as trees (e.g. binary search trees).

    10. If I recall, there were a couple of huge threads here on why (or why not) MP is done in sets. I think some references were there? At least from the CS side.

      On the second point, there is a huge literature on brain computation at any level you choose. Here is a link to an annotated database of journals and special issues, http://home.earthlink.net/~perlewitz/journals.html
      and it's growing every day.

    11. This comment has been removed by the author.

    12. @Thomas. As you know, most flavors of Minimalist syntax assume that the sets in question have at most two members. It is trivial to construct an efficient implementation of the basic set operations for sets with <= 2 members, so I don't see how it's relevant that sets in general are a relatively complex data structure to implement. It's not as if Minimalist syntacticians need efficient set membership tests for arbitrarily large sets of syntactic objects.

      I do agree that a lot of Minimalist syntacticians seem to think that the 'sets vs ordered pairs' issue is vastly more important than it actually is. But I don't think that the (admittedly idiosyncratic) decision to insist on sets as the primitive raises any real implementation problems. If I wanted to implement Minimalist-style 'sets' in Python, I would just implement them as 2-tuples and ignore the ordering information thereby encoded. Easy.

      By the way, sets are not hashable in Python simply because the Python stdlib happens to implement them as mutable objects (lists are also unhashable for the same reason). It has nothing really to do with properties of sets as such. If Python treated sets as immutable (as it does e.g. strings, numbers, tuples), then there would be no difficulty in hashing sets of hashable objects. So e.g. Haskell's (immutable) Data.HashSet is itself hashable.
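
      For concreteness, here is a minimal sketch of that 2-tuple idea in Python (my own toy helper names, not a canonical implementation): the order is present in the encoding, but nothing downstream is allowed to consult it.

      # Merge as plain pair formation.  Tuples are immutable and hashable,
      # so nesting them is trivial.
      def merge(a, b):
          return (a, b)

      def erase_order(so):
          """Recursively forget the encoded order (used for comparisons only)."""
          if isinstance(so, tuple):
              return frozenset(erase_order(m) for m in so)
          return so

      def same_so(x, y):
          """Order-insensitive identity of two syntactic objects."""
          return erase_order(x) == erase_order(y)

      vp = merge("eat", merge("the", "apple"))
      print(vp)                                                # ('eat', ('the', 'apple'))
      print(same_so(vp, merge("eat", merge("apple", "the"))))  # True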

    13. (Well, actually, Data.HashSet in Haskell is deprecated in favor of Data.Set, which doesn't use a hashtable implementation; but in any case it is immutable and allows sets of sets.)

    14. @Laszlo:
      Thx for the overly extensive bibliography. My info concerning the current state of the neuro art comes from the paper by Dehaene and Co that I discuss here: (http://facultyoflanguage.blogspot.com/2016/09/brain-mechanisms-and-minimalism.html). They observe that how to implement algebraic and hierarchical structure neurally is currently unknown. Are they wrong? Do we have a good understanding of how this is done? I should add that even if one can do both of these, do we have a good idea of how wetware implements recursion? Given that linguistic objects are algebraic, hierarchical and recursive, it would seem that how neurons do these sorts of things is currently unclear, at least if Dehaene is to be believed. None of this is to say that people are not working on these issues, but there is a difference between doing so and having actually made progress relevant to what the behavioral studies show must be inside the brain.

    15. @Alex: I took Norbert's question to be about sets VS graphs in general. And there I do find it insightful to look at how sets are actually implemented because it really shows that the idea of sets as intrinsically simple objects does not hold. When Minimalists say "set", they don't mean sets, at best they mean a very specific subcase of sets. So why say "set" in the first place?

      Your implementation of sets as tuples is not limited to sets with just 2 members; it works for any finite set if you don't care about efficient membership tests (if you have a way of lazily instantiating a list, even some infinite sets can be implemented this way). But then we're back to the old question whether syntactic sets are unordered or the order simply isn't used. For some reason, the latter position is unpopular. A faithful implementation of syntactic sets that has all the properties syntacticians want and none of the properties they do not want would be hard.

      (As for Python sets not being hashable, I briefly mentioned frozensets as the immutable, hashable, and thus nestable counterpart to sets. That said, I've never found a real-world use case for them, I can't even think of a good reason to use sets as keys. *shrug*)

    16. @Thomas. My point re Python was that the implementation details of the Python stdlib have nothing to do with what we're talking about here. Bringing up these details was potentially confusing to people who aren't programmers, as it appeared to suggest that there's some inherent difficulty with implementing sets of sets, when this is not the case.

      So why say "set" in the first place?

      It's a bad terminological decision. Bad terminology abounds.

      A faithful implementation of syntactic sets that has all the properties syntacticians want and none of the properties they do not want would be hard.

      There's nothing at all hard about it. It's spreading FUD to suggest that Minimalist syntacticians' half-assed use of the term 'set' somehow presents a deep puzzle about how syntactic structures are to be encoded for computational purposes. You can just use an ordered pair and ignore the order. Whether or not that's a "faithful" implementation of sets with <= 2 members is a largely philosophical question.

    17. Laszlo said that people characterize primitives like Merge as set-building, while neuro (and CS) results show that set-building is complicated and expensive compared to other ways. Norbert found that surprising, so I added a few examples why sets can be regarded as complex data structures in CS rather than primitives. We can quibble about the details of what that entails for Minimalism, but if the compromise position is "we don't mean sets when we say sets, and it is okay to implement them as ordered structures that do not satisfy idempotency", then Laszlo has a point that phrasing Merge in terms of set-formation makes bridging the gap between disciplines harder rather than easier.

    18. As long as the 'sets' are created and manipulated via an abstract interface (without access to the underlying tuple), they can satisfy idempotency etc. After all, you wouldn't say that a Python hashtable doesn't really have the properties of a hashtable because there's an underlying array of buckets.

      The idea that Chomsky's use of 'set' is a barrier to cross-disciplinary work seems a little specious to me. Do you know of any neuroscientists who would like to show that generative syntacticians are wrong? They probably outnumber the ones who'd like to do the opposite! So, great, they can show us the evidence that syntactic structures are encoded using (say) ordered pairs rather than unordered pairs as primitives. I'm sure every syntactician would be surprised and delighted at the result.
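
      A minimal sketch of that kind of abstract interface in Python (invented class and method names, building on the 2-tuple sketch above rather than being anyone's official proposal): the store is an ordered tuple, but the only exposed operations are order-blind, so commutativity and idempotency hold from the outside.

      class SO:
          """Toy syntactic object: a tuple under the hood, an unordered 'set' at the interface."""

          def __init__(self, *members):
              dedup = []
              for m in members:            # drop duplicates, so SO(a, a) == SO(a): idempotency
                  if m not in dedup:
                      dedup.append(m)
              self._members = tuple(dedup)

          def __contains__(self, item):
              return item in self._members

          def __eq__(self, other):         # order-blind: SO(a, b) == SO(b, a)
              return (isinstance(other, SO)
                      and len(self._members) == len(other._members)
                      and all(m in other for m in self._members))

          def __hash__(self):              # equal objects must hash alike, so ignore order here too
              return hash(frozenset(hash(m) for m in self._members))

          def __repr__(self):
              return "{" + ", ".join(repr(m) for m in self._members) + "}"

      def merge(a, b):
          return SO(a, b)

      assert merge("the", "apple") == merge("apple", "the")  # commutativity
      assert merge("the", "the") == SO("the")                # idempotency
      print(merge("eat", merge("the", "apple")))             # {'eat', {'the', 'apple'}}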

    19. Norbert’s point was less subtle. He knows of nothing indicating that neuro types know how to represent hierarchy or algebraic structure neurally. Here he is just echoing Dehaene. So the idea that this is what is getting in the way of collaboration seems implausible. I added that there are currently no ideas of how to integrate unbounded hierarchy (recursion) neurally either. So, the debate, though heated, seems to me to take as a premise what nobody right now has the faintest idea of how to execute, at least if Dehaene is to be believed (and I believe him).

    20. @Thomas and Alex: Yeah, sorry, I didn't want to go into a debate about WHAT the best characterization is, only that such conversations are crucial, yet their outcome inevitably runs into the Granularity Mismatch Problem regardless of the answer.

      @Norbert: Agreed on the issue of hierarchy and recursion, but unless I misunderstand, if you have Int./Ext. merge, or merge/move a la Stabler, you get both for free? One tentative hypothesis is that they're encoded by phases, which Nai Ding showed in 2016, and which Elliot Murphy has a Biolinguistics article on.

      The really thorny issue is what level of neural computation we wanna work at. Single-neuron? Spike-train? Coupled population? Dendritic tree? Molecules a la Gallistel? All have computational properties which might encode set-like objects (see Izhikevich's dense but very instructive book, or Christof Koch's kinda dated but still very helpful "Biophysics of computation", just for single neurons!). I agree that it's a mess, because there's not much cross-talk about exactly WHICH level we wanna work with, but I think that's the interesting stuff! This mess might be the super hard problem you were talking about, though Eric Hoel from Columbia argues (http://www.mdpi.com/1099-4300/19/5/188) that it isn't really a problem. Tbh I also don't know how to fix it, nor do I know how much of the onus is on GG syntax to save the day here. How do you envision ways neuroscientists could help MP linguists out?

    21. Also, see a very cool recent talk on a similar subject by Alessandro Tavano at MPI on how varying constituent size is tracked by oscillation phase in the same way as Ding showed.
      https://www.youtube.com/watch?v=4U0iGykpF-o

  3. This comment has been removed by the author.

  4. But if Merge is generally thought to be the primitive, and constantly advocated to be so, what's there left for MP to do? Shouldn't that mean the goal of providing the parts list has actually been reached?

    1. MP provides the parts list if it shows that it can recover much/most of the GB facts. This is the test of an MP: to show how it gives you something like GBish universals properly understood. Chomsky does some of this when he shows that Merge properly understood delivers structure dependence, reconstruction etc. I think that one needs to go further (as I've suggested in various recent posts concerning the Merge Hypothesis and the Extended Merge Hypothesis). Once this has been done, and I believe that we can see a light at the end of this tunnel, though there are some serious parts of GB that are proving hard to "reduce"/"unify," then MP will be over and we will have shown what an FL must look like to derive GB. The FL will have linguistically proprietary parts and domain/cognitively general parts. This division will provide things for cog-neuro to look for.

      We might even go further and show how syntactic, phonological and semantic structure are related. This is what Idsardi, Pietroski, Heinz, Graf etc. have been doing.

      If we get these two things, then the classical Chomsky project, IMO, will be over. We will have shown the unifying structure underlying G phenomena and separated G structure from Cog structure more generally in the operations of FL. This would then provide the requested parts and operations list.

      Are we there yet? Not quite. Are we close? I believe we are.
