Monday, November 12, 2018

Guest post by William Matchin on SinFonIJA part 2

Last post, I discussed the main thrust of my comments at the roundtable. Now I turn to some of the points raised by the keynote speakers and offer my thoughts on them.

Cedric Boeckx

The standard perception of outsiders looking at formal linguistics (e.g., psychologists or neuroscientists) is that, rather than pursuing fundamental questions about the nature of the mind/brain, linguists are really philologists interested in the history and grammar of particular languages divorced from psychology and biology. Cedric’s talk and comments throughout the conference explicitly made this point – that linguists have overwhelmingly focused on the analysis of particular constructions and/or particular languages using the tools of generative grammar, without really addressing the foundational questions driving the field: the basic architecture of the faculty of language (Humboldt’s problem), the relative contributions of its genetic and environmental components (Plato’s problem), how it is implemented in the human brain (Broca’s problem), and how it evolved (Darwin’s problem). Readers of this blog are certainly aware that Norbert has repeatedly made this point (i.e., the linguist/languist distinction). Cedric’s presence at the conference amounted to evangelism for biolinguistics and a return to these core questions, with dire warnings about the future of the field (and job prospects) if linguists do not return to them.

There is a lot of truth in these statements. From my perspective, I rarely see somebody in linguistics (or psycholinguistics) explain to me why exactly I, as a neuroscientist, should care about this particular analysis or experiment regarding some construction in some language. Neuroscientists and those studying aphasia or other language disorders repeatedly make this point. Of course there are relevant connections, in that analyses of particular phenomena (should) inform the general theory, which is ultimately a theory of biological architecture. When aspiring to communicate with other cognitive scientists, these connections need to be made explicit. Cedric would essentially have the philology stop wholesale – while I do not agree with this extreme view, I do agree that much of this work seems unnecessary to the broader goal of advancing the theory, especially in the absence of even attempts to make these connections.

I would like to add two points.

First, this issue is hardly unique to linguistics. Largely the same issues hold in other fields (e.g., why exactly does it matter which lesion distributions are associated with a particular syndrome, either for patient care or theoretical development?). But there isn’t the convenient languistics/philology label to apply to those who seem distracted from the central questions at hand. In some ways this is a credit to linguistics – the philosophical foundations of the field have been made explicit enough to point out the difference between philology and linguistics. Neuroscientists are ostensibly interested in asking fundamental questions about how the brain works. But I think this widespread myopia arises in part because of sociology – we are constantly interacting with our peers, feeling compelled to react to (and express sympathy for) what they are doing, and it is far easier for a new graduate student to perform a neuroimaging experiment and publish a paper following up on what somebody else has done than to reflect on the basic nature of the mind and brain. For good or bad, our field rests on interaction and personal connections: being invited to conferences, having reviewers knowledgeable of our work and sympathetic to it, asking for letters of recommendation. There are few worldly incentives for pursuing the big questions, and this cuts across all of the sciences.

Second, at least within the GG community (as exemplified by SinFonIJA), people really do seem to care about the fundamental questions. The keynote speakers whose careers are devoted to linguistic analysis within the generative tradition (Ian Roberts, Lanko Marušič, and Tobias Scheer) all presented on topics bearing on the fundamental questions listed above. Every senior linguist I talked to at the conference clearly had thought and reflected deeply on the fundamental issues. In fact, one linguist mentioned to me that the lack of focus on fundamentals stems not from lack of interest but rather from distraction (in the manner described above). Many of the young people at the conference buzzed Cedric and me with questions and seemed quite interested to hear what we had to say. Apparently these issues are at the forefront of their minds – and it’s always telling to get a sense of what the young people are thinking, because they are the future of the field (that is, if they get jobs).

Overall, I agree with much of what Cedric said, and there were quite similar currents in both of our talks. The main question that I think linguists should be asking themselves is this: what do I really care about? Do I care about the philosophical problems outlined above? Or do I care about analyses of specific languages, i.e. philology? If the answer is the former, then I very much recommend thinking of ways to help reconnect linguistics with the other cognitive sciences. If the answer is the latter, then I don’t have much to say to you except, to quote Cedric, “good luck”.

I love languages – I see the appeal and the comfort of leaving the heavy theoretical work to others. I have spent much of my life learning languages and learning about their corresponding cultures. But that is not nearly enough to sustain the field of linguistics much further into the modern age, in which funding for the humanities is being dramatically cut in the U.S., and in which other scientists, the potential collaborators who will still be funded, are very much disenchanted with generative grammar.

One last comment about Cedric’s talk. While we agree on the point above, we disagree about what is currently being done in other fields like language evolution. His perspective seems to be that people are making real progress, while my perspective echoes Chomsky’s – skepticism of much of this work, particularly with respect to evolution. I think that Cedric has a bit of a “grass is greener” syndrome. However, I do not mean to be completely pessimistic, and the work by people like Alec Marantz, Stanislas Dehaene, Christophe Pallier, Greg Hickok, John Hale, Jonathan Brennan, and others presents promising connections between neurobiology and linguistic theory. As reviewed here on FoL, Randy Gallistel has been highlighting interesting findings in cellular neuroscience that inform us about how neurons actually store representations and perform computations over them.

Ian Roberts

As I mentioned above, Ian Roberts gave an interesting keynote lecture highlighting the viability of a single cycle underlying all of syntactic derivation. It is this sort of work, reinforcing the basic components of the competence model from a forest (rather than the trees) perspective, that presents a tantalizing opportunity for asking how such properties are implemented in the brain.

However, I was somewhat disappointed in Ian’s response to my presentation calling for such an integration. He pointed out the viability of a purely Platonic view of formal linguistics; that is, that the study of linguistics can be perfectly carried out without concern for integration with biology (to be clear: he did not endorse this view, but merely pointed out its viability). He also seemed to dismiss the absence of invitations for interaction to formal linguists from the other cognitive sciences as flaws in those fields/individuals. The underlying thread was something like: “we’re doing perfectly fine, thank you”.

I do not disagree. One can do platonic linguistics, and cognitive scientists are unfairly dismissive of formal linguistics. But this misses the point (although perhaps not Cedric’s). The point was: assuming we want integration of formal linguistics with other fields (and I think almost everyone agrees that we do, at least given my impressions from the conference), one critical obstacle to this integration, which linguists are in a very good position to address, is how competence relates to performance (or, the grammar-parser relation) on a mechanistic level.

Ian is clearly very thoughtful. But I was worried by his response, because it means he is missing the writing on the wall. Perhaps this is in part because the situation is better in Europe. The cognitive revolution was born in the United States, and I believe that it is also the place of its potential deathbed. The signs may be clearer here than in Europe. Altogether, if the orcs are about to invade your homeland, threatening to rape and pillage, you don’t keep doing what you’re doing while noting that the situation isn’t your fault because you were always nice to the orcs. Instead, you prepare to fight the orcs. And if there is one thing that Cedric and I heartily agree on, it is that the orcs are here.

Tobias Scheer

Tobias’s main point at the roundtable was that there is still a lot of work to do on the competence model before it can be expanded into a performance model of online processing. This is perhaps the best counter to my argument for working on the performance model – that it’s a good idea, but that there are practical limitations of the sort Chomsky outlined in Aspects that have not gone away.

As I discussed in my previous blog post, I often find myself remarking how important this issue is – language behavior is so complicated, and if you add on the complexities of neuroimaging experiments, it is hard to really make anything coherent out of it. The competence-performance distinction has been invaluable to making progress.

The main question is whether or not it is possible to make progress in working on performance. With respect to language, the competence-performance distinction is an absolutely necessary abstraction that narrows the focus to a small set of possible data while still allowing for the analysis of a wide range of constructions across the world’s languages and for theoretical development to occur. The disagreement concerns whether or not it is possible at this time to move beyond this particular abstraction to other, slightly less focused, abstractions, such as a model of real-time language processing that can account for simple constructions, and the acquisition of such a model.

This is an empirical assessment. It’s pretty much impossible to understand what the hell people do mechanistically when they perceive a garden-path sentence (much less interpret a neuroimaging experiment on garden-path sentences). But, in my view, it is possible to largely preserve the virtues of the competence-performance distinction with respect to limiting the relevant set of data by only aspiring to develop a performance model for fairly simple cases, such as simple transitive active and passive sentences.

In addition, there might be something (a lot?) to be gained from thinking in real time that could explain phenomena that are troublesome from the traditional standpoint of linguistic theory. For instance, I don’t know of any better approach to the constituency conflicts Colin Phillips pointed out in his 1996 dissertation and 2003 LI paper than the idea that sentences are built incrementally, which naturally accounts for the conflict of constituency tests[1]. There may be many more such phenomena that could be addressed from the standpoint of real-time processing and that help simplify the competence model itself. How do you know until you try?
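
To make the intuition concrete, here is a toy sketch of strictly incremental, left-to-right structure building in the spirit of Phillips's proposal (hypothetical code, not his actual system): each new word merges at the lowest right edge, so a string like [will eat] is briefly a constituent and is then destroyed by later merges, which is just the kind of transient constituency that conflicting constituency tests seem to detect.

```python
# Toy sketch of strictly incremental, left-to-right structure building,
# loosely in the spirit of Phillips (1996, 2003). Illustrative only:
# each new word merges at the lowest right edge, so intermediate
# constituents exist only transiently.

def merge_right(tree, word):
    """Merge `word` at the lowest right edge of `tree`."""
    if isinstance(tree, str):
        return [tree, word]  # a lexical right edge becomes a branching node
    return [tree[0], merge_right(tree[1], word)]

words = ["Wallace", "will", "eat", "the", "cheese"]
tree = words[0]
for w in words[1:]:
    tree = merge_right(tree, w)
    print(tree)

# The second step prints ['Wallace', ['will', 'eat']]: [will eat] is a
# constituent at that point, but it is gone from the final right-branching
# structure, where 'will' is instead sister to [eat [the cheese]].
```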

Ianthi Maria Tsimpli

Ianthi’s talk at the conference presented data illustrating differences in language behavior between monolingual and bilingual speakers.

Ianthi Tsimpli’s comments at the roundtable, and much of her other work, point out that there are really fantastic opportunities to make more direct connections between work on developmental disabilities and theories of universal grammar, i.e., the genetic contribution to the language faculty. Ianthi was one of the main scientists who studied Christopher, the savant who was severely intellectually disabled yet able to learn many languages fluently. She summarized for me some of the major findings on Christopher regarding British Sign Language (BSL), which I believe illustrate the biological (and likely genetic) autonomy of language from other aspects of cognition.

There are three main facts.

(1) Christopher did essentially as well as L2 learners in learning BSL, despite his severe intellectual disability. This is important because it reinforces the notion (which I believe is not really in need of reinforcing) that sign languages are like spoken languages in all the relevant psychological respects, including the distinction between language and other cognitive domains, but more importantly that language is something different from other things, potentially with distinct biological underpinnings.

(2) The one domain where Christopher struggled was classifier constructions, which rely heavily on visual-spatial abilities in which Christopher is already impaired. This is not very interesting except for the fact that it clarifies the nature of what may seem like important differences between speech and sign – if you cannot process certain rapid formant transitions because of a disorder of your auditory periphery, you probably will not learn consonant speech sounds very well, but this is merely a barrier to processing speech, not an indicator that the deeper levels of organization between speech and sign are fundamentally different. The same goes for classifiers – they are exciting properties of sign language that clearly rely heavily on the visual-manual nature of sign languages, but this does not mean much about their more abstract organization in the language system.

Again – there is something about language itself that is not reducible to the sensory-motor externalization of language.

(3) Christopher essentially ignored the iconic properties of signs when acquiring the lexicon, whereas hearing L2 learners are quite sensitive to them. This underscores that language acquisition, at its core, really doesn’t care about iconicity, and indicates that while the study of iconicity may be interesting to some, it is orthogonal to the essential properties of language, which are its abstractness and arbitrariness. This fact has been clearly laid out for decades (see e.g. Bellugi and Klima’s paper “Two faces of sign: Iconic and abstract”), but again, it is reinforced by Christopher’s remarkable ability to learn BSL effectively while ignoring its iconic elements.

In the roundtable discussion, Ianthi pointed out that there are problems waiting to be addressed that would greatly benefit from the insights of generative grammar. To me, these are the golden opportunities – there are a wide range of disorders of language use, and working on them presents opportunities for collaboration with biologically-oriented fields that generally have much greater funding than linguistics (e.g., language disorders, neuroscience, genetics). I recommend the book chapter she wrote with Maria Kambanaros and Kleanthes Grohmann [2] (edited by Ian Roberts), which discusses in more detail some of this work and highlights the possibilities for fruitful interaction.

Franc (Lanko) Marušič

Lanko Marušič’s talk reported behavioral experiments attempting to explain the roughly universal adjective ordering preferences addressed in the cartographic approach of Cinque. The idea was that if preferences for certain features (such as color, size, shape) come from non-linguistic cognition, then one should find the same preferences in non-linguistic cognition. Thus he reported behavioral experiments that attempted to map out the salience of these features for experimental subjects, to ascertain whether the results agreed with the linguistic ordering preferences. The experiments themselves were a bit complicated and difficult to interpret, as there were many possible confounding variables that the experimenters attempted to grapple with (again, illustrating the deep pitfalls of investigating performance generally). However, this experiment certainly interested me and is exactly the type of thing to interest non-linguists in linguistics.

Outside of the conference, I spent time talking with Lanko. In our informal conversations, he mentioned to me the possibility of attempting to localize syntactic representations in the brain by building off our understanding of the interfaces that syntax must deal with: the conceptual-intentional (CI) and sensory-motor (SM) interfaces. That is, if language is accurately captured by the Y-model, then syntax should sit between CI and SM in the brain. This is a great idea, and it happens to be a cornerstone of the model I am currently developing with Greg Hickok. This illustrates that there can in fact be value for neuroscience taken from linguistics – not at the level of a particular construction, but at the higher level of broader components of linguistic theory, like the Y-model, cyclicity, etc.

Closing thoughts

Most of the conference presentations were not concerned with the questions I addressed above. Most posters and talks addressed standard questions discussed at linguistics conferences – and these presentations were, for the most part, excellent. I was very happy to be part of this and to remind myself of the high quality of work in linguistics. One of the virtues of linguistics is that it is highly detailed and reflects the health of a mature field – one does not need a general introduction to acceptability judgments, the competence/performance distinction, etc. to understand a talk. These shared underlying assumptions allow for very efficient presentations and discussions as well as progress, at least in terms of analyses of specific constructions.

In some sense, as discussed above, the crisis that I (and Cedric) perceive in linguistics, in the context of the other cognitive sciences, is unfair to linguistics – other fields suffer from the same problems, and there are plenty of healthy aspects to the field. Linguists in general seem more thoughtful about the underlying philosophical issues of science than those in other fields, as evidenced by my conversations with the conference attendees (and particularly keynote speakers).

On the other hand – the crisis is obviously there, deserved or not. I spend much time talking to linguists about the job prospects for graduate students. It seems to me that what linguistics is doing to address this issue is to shift from a theoretical focus to working on issues that have a more superficial appeal to other fields, or that can provide training for jobs outside of linguistics (e.g., computational modeling). This might be helpful for getting jobs, but I worry that it essentially hastens the abandonment of the core questions of interest underlying generative grammar: Humboldt’s problem, Plato’s problem, Broca’s problem, Darwin’s problem.

In my view, there is a fantastic opportunity at hand: a preservation of these core philosophical problems as well as jobs. That opportunity is working towards a performance model. This project, broadly construed, could include work along many dimensions, including much of the kind of work currently being done: understanding the appropriate analysis of constructions/linguistic phenomena from the vantage point of incremental derivation, in the style of Phillips’s (2003) analysis of constituency conflicts. With respect to my world, it could mean developing a more realistic understanding of how linguistic theory relates to neurons and genes. In between, it could involve the development of a plausible parser/producer that incorporates a syntactic theory (work that Shota Momma is currently pursuing).

At any rate, that’s my two cents. SinFonIJA was a lot of fun, and I cannot thank Marta, Ewa, and Mateusz enough for inviting me and being quite generous in their accommodations. At some point in the near future, conference proceedings will be published in the Journal of Polish Linguistics (edited by Ewa Willim) – stay tuned for what I hope is a very interesting set of papers.


[1] Phillips, C. (1996). Order and structure (Doctoral dissertation, Massachusetts Institute of Technology). Phillips, C. (2003). Linear order and constituency. Linguistic Inquiry, 34(1), 37-90.

Comments

  1. "It seems to me that what linguistics is doing to address this issue is to shift from a theoretical focus to working on issues that have a more superficial appeal to other fields, or that can provide training for jobs outside of linguistics (e.g., computational modeling). This might be helpful for getting jobs, but I worry that it essentially hastens the abandonment of the core questions."

    I agree that we're in the middle of a shift, but I'd say that will help with the core questions, rather than pushing them aside.

    Although you never say it explicitly in your post, my one-line summary would be that linguists care about the connections, but few of them actually work on them. Why is that? I'd say it's because generative grammar is not well-suited to building those bridges. Its strength is the use of fine-grained mechanisms that can make very detailed predictions. It excels at the study of language because it's custom-tailored to that very purpose.

    But that specificity makes it harder to build bridges, not easier. Neuroscience, psychology, and cognitive science in general are much coarser in their models, and the granularity mismatch between them and generative grammar is the root cause of why the relation is so strained and exchanges are rarely fruitful.

    Computational methods provide a more abstract and content-agnostic perspective, and this is exactly what makes it easier to find common ground with other fields and address the big issues. If a phonologist presents a MaxEnt learner for modeling phenomenon X, that builds a bridge to other cognitive domains where MaxEnt learners are used. If somebody does game-theoretic pragmatics, they can immediately collaborate with anybody else that does game theory, and again they can connect their work to more general ideas about human reasoning. There are many other examples like that.
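
    To make that concrete, here is a minimal sketch of the MaxEnt idea (the constraint names, violation counts, and weights below are invented purely for illustration): candidate probabilities come from a softmax over weighted constraint violations, the same log-linear machinery used across machine learning and cognitive modeling.

```python
import math

# Minimal sketch of a MaxEnt (log-linear) grammar. The constraints,
# violation counts, and weights are invented for illustration; the point
# is the shared softmax-over-weighted-violations machinery.

weights = {"*VoicedCoda": 2.0, "Ident(voice)": 1.0}

# Candidate outputs for a hypothetical input /bed/, with violation counts.
candidates = {
    "bed": {"*VoicedCoda": 1, "Ident(voice)": 0},
    "bet": {"*VoicedCoda": 0, "Ident(voice)": 1},
}

def maxent_probs(candidates, weights):
    """P(candidate) is proportional to exp(-harmony), where harmony is
    the weighted sum of that candidate's constraint violations."""
    scores = {c: math.exp(-sum(weights[k] * v for k, v in viols.items()))
              for c, viols in candidates.items()}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(maxent_probs(candidates, weights))  # {'bed': 0.27..., 'bet': 0.73...}
```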

    Outside of phonology, generative grammar has been mostly on the sidelines for this kind of work because i) few linguistics departments are set up to train students in these domains, and ii) this kind of work rarely sheds light on the issues that are the bread and butter of linguists --- A vs A' movement, island effects, and so on. Even though linguists are interested in the big picture, that's not what gets published in linguistics journals, and that creates a natural incentive not to "waste" curriculum time on a broader computational education.

    Students pushing back and demanding more computational training for job security can only improve the situation. After all, it is a bit absurd that we always tell our students how important language acquisition and the poverty of the stimulus are to linguistics, yet we do not teach them anything from the last 50 years of machine learning or grammatical inference.

    1. I think computational methods are definitely part of a way to improve the connection. E.g., John Hale and Jonathan Brennan's work. What I tend to see, though, is not that kind of thing but rather computational training without a focus on those deeper questions.

      So I suppose I should amend my comments. Computational methods are important, but I'd like to see them connected up with the foundational questions of generative grammar.

    2. The foundational questions of generative linguistics are computational, and this is what gives them staying power, so linguists should explicitly be linking what they say to computation. Chomsky has been very clear on this, like in this debate with Piaget from 1980:

      "that is exactly what generative grammar has been concerned with for twenty-five years: the whole complicated array of structures beginning, let’s say, with finite-state automata, various types of context-free or context-sensitive grammars, and various subdivisions of these theories of transformational grammars - these are all theories of proliferating systems of structures designed for the problem of trying to locate this particular structure, language, within that system. So there can’t be any controversy about the legitimacy of that attempt."

      As for the learning question, full ack. Jeff Heinz and I recently wrote a reply where we discuss the perils of learning and language if we don't take computation seriously.

    3. @Wiliam: Yes, a lot of computational work does not address the deeper questions, but as you lamented yourself, the same is also true for a lot of generative work. A shift towards computation isn't magically gonna make everybody work on the cognitive questions, but it gives useful tools to those that want to.

      Even very application-oriented introductions to NLP cover important topics for linguists, and it doesn't take much digging to find the connections. For example, word embeddings aka vector space semantics are all the rage now. In those models, each word gets assigned a vector in a high-dimensional vector space depending on how often it occurs with other words in a corpus. That doesn't sound like anything linguists should care about, but experiments suggest that these models behave similarly to humans in word association tasks. So there's something there about how the mental lexicon is organized, and it raises deep questions about how compositional semantics can be represented in terms of operations on vector spaces. And as a nice bonus, the whole approach also requires you to master mathematical concepts like tensor products, which also play a central role in physics --- so all of a sudden you have something to discuss with your friendly neighborhood physicist at the next cocktail party.
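
      To illustrate with a count-based toy version (the corpus and window size here are my own invented assumptions, not any particular published model): each word's vector is just its co-occurrence profile, and similarity falls out as the cosine between vectors.

```python
import math
from collections import defaultdict

# Toy count-based vector space: each word's vector is its co-occurrence
# profile within a +/-2 word window. Corpus and window size are invented
# for illustration; real models (e.g. word2vec) learn dense vectors, but
# the distributional idea is the same.

corpus = "the cat chased the mouse and the dog chased the cat".split()
window = 2

vectors = defaultdict(lambda: defaultdict(int))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            vectors[w][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v))

# Distributionally similar words end up close: cat/dog beats cat/chased.
print(cosine(vectors["cat"], vectors["dog"]))     # ~0.91
print(cosine(vectors["cat"], vectors["chased"]))  # ~0.71
```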

  2. One thing I'd add is that Chomsky's revolution was essentially a revolution in philosophy of mind and there's no doubt that his mentalist project has been sidelined by people using GG only for language description, but I think this is only half the story, in that the basic methodology of GG is in some ways responsible for the disregard for the mind that we're talking about.

    If you look at Berwick et al.'s 2011 'PoS Revisited' paper, for example, you get a very clear statement of the idea that generative linguistics is all about describing attained adult competence with a view to determining the nature of its computational component, all theories of acquisition, development, cognition and evolution being interesting but methodologically 'secondary'.

    Setting aside the epistemic issue of whether it's actually possible to have a theory of what is attained without a theory of how it's implemented, if you take this approach, you legitimize ignoring everything except language description because, in describing language, you are describing the mind, so long as you've paid lip service to Plato's Problem.

    I think this makes the linguist/languist distinction a little problematic because, so long as language description is held to be a cognitive project in its own right, we can argue about people's motivations, but there is essentially nothing besides exposition that distinguishes GG linguistics from languistics. Yes, GG at heart says that you should describe languages with a more ultimate cognitive aim, but it also says that you can achieve this aim while doing nothing but describing languages.

    Thus, we do get some languists who rehearse Plato's Problem and Darwin's Problem only as a kind of ritualistic invocation, but at the same time, other linguists rehearse PP and DP because they're serious about linguistic nativism, yet the importance of these problems has been recognized for decades and GG has done basically nothing with them - its only contribution to the theory of acquisition is still its suggestion of what is not learned, and its only contribution to the theory of language evolution is its suggestion of what is not evolved by bread and butter natural selection.

    Now, one response to this pessimistic picture is that I'm being unfair because we're dealing with different levels of description and it's too much to demand that people target the computational system and its implementation in one go. Thus, you end up with the Ian Roberts position that it is viable to have a 'platonic' linguistics.

    But let's be honest, for a start. This isn't any genuine philosophical Platonism - we're working on a theory founded upon an interest in the mind, bounded by conditions on what the mind can contain - 'Platonism' here is just a polite term for having no interest in cognitive science because it's all brain stuff and stats.

    I do think that avoiding interdisciplinarity is just fine, up to a point - it has after all been our strategy for many years and we have managed to get a lot right in GG. But taking an essentially metaphysical approach to a psychological project has given us a theory that is, I think, absolutely rife with metaphors for things that go on in the brain, many of which may be quite misleading in ways that we don't yet realise.

    Crucially, I think this means in the end that we're kidding ourselves if we think that GG is simply describing reality at an abstract level in a way that we are not yet ready to reduce to the physicality of wetware and so on. Rather, I think we fundamentally do not know how to make GG into a theory that is fit for reduction, regardless of when it will be possible, because we haven't put in the necessary theoretical work (never mind any of the brain stuff), and this is largely because it's a basic principle of GG that it's always someone else's problem.

  3. I think this pair of posts runs together two different issues that generative linguistics faces / might face. One concerns institutional and funding-related matters. The other concerns scientific matters. They are not mutually exclusive, of course, and they may in actuality both end up as current threats, to one degree or another, to the future of linguistics. But I think that keeping them separate in discussions like these makes the discussion more lucid and useful.

    I, personally, have never found the "do X kind of work otherwise the folks with the big money won't pay attention to you" argument particularly compelling. If I can't do the kind of research I want because the world has conspired to make that impossible, and/or because I have failed to convincingly argue in front of said world for the merits of my kind of research, the solution to that is emphatically not "do another kind of research, then," as far as I'm concerned. You might think that this is faculty-member-with-a-permanent-position privilege talking, but in fact this has been my personal position ever since my first days in grad school (as those who were there with me can attest). More specifically: suppose someone wants to be a theoretical syntactician, but cannot find an academic position doing that. Sure, they can do more cognitive-neuroscience-friendly things instead; they can also just go work for Google. I wouldn't begrudge anyone either of these, nor any other such choices. There is no moral imperative that says Thou shalt do linguistic research! I love my field, and I'd like to see it thrive. But I love my field; I don't necessarily love alternative versions of it that are driven by practical contingencies.

    If you accept the above (and you might not), then what remains is the question of whether attention to current trends in broader cogsci research is critically required for the health of linguistic theory. My personal opinion here is that it's really, really not. In fact, a lot (not all; but a lot) of the linguistic work in the last 10-15 years that has been driven by overtures to non-linguistic cognition has been, in my estimation, some of the worst linguistic work I have seen. Even if my assessment is right, of course, this does not mean that it has to be this way. But getting back to my main point, I'm not sure what the particular crisis in linguistic theory is that paying more attention to cognitive neuroscience will solve. (This is not to say that I think everything is hunky dory in linguistic theory; as readers of this blog know, I am quite the grouch! It's just that for the problems that I'm talking about, taking a page from cognitive neuroscience's book is not what's likely to produce a solution, imo.)

    1. That's a good point, which applies to most of the work in the humanities - presumably William wouldn't advocate that Sumerian scholars start working in robotics because there's more money in it. And I certainly hope our society keeps funding humanities research. But I think the issue under discussion here is whether generative linguistics is best viewed as a branch of the humanities (like, say, language documentation, or Sumerology), or as a cognitive science. It usually identifies in the latter way, but as Callum pointed out, this professed identity is not in general consistent with the practice in the field (to quote from his comment, "there is essentially nothing besides exposition that distinguishes GG linguistics from languistics"). I agree that that doesn't affect the fact that this practice may be a fun intellectual activity that may be worth pursuing in its own right (like other branches of the humanities), but then it would also make sense for our society to fund it as such, rather than, say, through the National Science Foundation in the US.

    2. Hmm, not sure I see the connection you're making between my institutional/funding-related point, and science-vs.-humanities affiliation. I think current generative linguistics is a far better exemplar of cognitive science than most of current cognitive psychology and cognitive neuroscience, which might be better described as "theory-free cognitive data gathering." Linguistics (when done right, anyway) involves theory-driven identification of interesting empirical domains in which to test and sharpen hypotheses.

      I guess what I'm saying is I disagree with Callum's point: I think generative linguistics is definitionally cognitive science. Let's say someone publishes a paper in LI or NLLT or whatever that pits current theories of x,y,z against the behavior of x,y,z in language L, seeing what works and what needs to be refined. And let's say this paper doesn't even have the "fig leaf" exposition that Callum is talking about. Because the grammar is a mental object, this paper is still a way better exemplar of cognitive science than the run-of-the-mill "I ran a multi-factor regression and am able to capture 45% of the variance in my dependent measure!" cogsci paper. (Needless to say, the fact that theoretical linguistics typically doesn't involve p-values or statistical analysis is a virtue, not a fault. One needs statistical power when one has nothing else to stand on.)

    3. OK, fair enough - it's reasonable to disagree with Callum's point. In fact, I think I agree with both of you to some extent here - work in generative grammar can certainly be cognitive science, but not all of the work that identifies as generative actually is (this is the point that Norbert has made too). But I thought the point you were making in your earlier comment was that regardless of whether it's cognitive science or not, generative linguistics is worth pursuing (and funding) because it's an intellectually stimulating activity.

    4. @Omer & Tal: Linguistics clearly occupies a very peculiar niche and I'm on board with generative theory being a much better example of science than much of what passes for psychology, but I don't see that GG is any kind of cognitive science. Sure, it's a humanistic science that by definition promises that its discoveries are somehow discoveries about the mind - and it would be foolish to doubt that - but its standard practice is to catalogue discoveries that are in need of a cognitive treatment without ever getting down to it. It sure likes to seem to be making substantive cognitive claims by talking about grammars as mental objects, but this is just repeating the promise that arises out of Plato's Problem; it is nothing more than a promise because the theory doesn't tell you how to make any more of it.

      If I were to label the field myself, what springs to mind is the term that Fodor used for his first book on the LOT: speculative psychology. As he put it, "it [isn't] quite philosophy because [it's] concerned with empirical theory construction. It [isn't] quite psychology because it [isn't] an experimental science. But it [uses] the methods of both philosophy and psychology because [it's] dedicated to the notion that scientific theories should be both conceptually disciplined and empirically constrained." I agree with Fodor that speculative psychology deserves a place at the table but how many years of speculation does it take before we can start cashing things out?

      Here, I stand by what I said above: the problem is not that we're having a hard time finding the correlates of generative theory in brain circuitry, it's rather that this task is not something that can even be sensibly formulated because it has never been on GG's agenda to make it sensible. It's a strange cognitive science that isn't interested in the brain.

    5. Like Tal, I feel like I agree to some extent with all of you, so I’d only touch on what I disagree on :)

      From your last paragraph, @Callum, it seems that you think some discipline is a cognitive science if and only if it is concerned with probing the precise neural instantiation of some theory of the mind?
      If so (and I might be misunderstanding) I don’t think I agree.
      Look at numerical cognition, for example. Not all the research being done there is concerned with what kind of circuitry can instantiate the ANS vs. precise counting, but I think you’d still consider work on counting abilities as cognitive work?

      We agree that trying to ground abstract theories in neurobiology is the end goal, but I do not think it is fair to argue that it is the only valuable cognitive contribution.
      In fact, computer science shows how it is perfectly possible to ask questions about computations that are detached from the specifics of physical implementation. I.e. I could build a Turing machine with a sequence of buckets and it would still be performing the same core operations, and it will still give us insights about the intrinsic complexity/cost/etc. of some processes over the other — which can be used to guide questions about implementation.

    6. Not quite but I can see why you'd take it that way. Let me be clear straight up that I know next to nothing about the neural instantiation of anything, so I'm not defending a particular domain in which I have a vested interest. What I'm saying is simply that you do have to have some theory of the mind to be getting on with, and though I do believe that there is such a theory to derive from the many insights of GG, I take its insights to be specifications of what a theory of the mind must account for, rather than GG actually being such a theory in its own right.

      Here, I can understand that to some I might seem to be merely asserting that GG is not a theory of the mind, seeing as it so often says of itself that it is, but my whole point is that it's exactly those statements that are the assertions - it's not that I'm trying to pull GG down, it's just that I'm not taking it at its word on what it thinks it's a theory of.

    7. Can't we invoke Marr's levels here and say that a computational theory of some aspect of the mind (say, the language faculty) is still a theory of that aspect of the mind, even if it isn't algorithmic or implementational?

    8. In principle, yes; for what it's worth, Marr himself thought of Chomsky's work as a parade case of computational level analysis. In practice, many theoretical decisions in GG are motivated by considerations such as typology and "simplicity" whose cognitive relevance is indirect at best. Not a lot of linguists are interested in whether their theory helps address the computational problem, which I see as "how do you learn language from the input you have".

      (As a side note, the references to cognitive neuroscience in this debate seem to me to be a bit of distraction, though. I don't think linguists *should* make predictions about the brain, except in the trivial sense that all cognitive functions should ultimately be implemented in the brain.)

    9. Maybe my comments have been confusing - for my part, I don't think our problem is that we need an implementational theory (I agree with Tal that linguists don't need to be making any predictions about the brain), I've just been trying to stress that we do need a theory that could in principle be implemented (i.e. it must actually be about the brain).

      So, my contention, really, is that a Marr-like computational theory of the language faculty is exactly what we should want but it's not what we have. It isn't that generative theory is a wrong theory of the mind, it's just that it's a theory of something other than the mind, so the task of turning it into a mental theory is still ahead of us.

      And why isn't it a theory of the mind when it says so strenuously that it is? In the briefest possible terms, because generative theory's standard fare is to give computational descriptions of language use, yet there is no guarantee that these descriptions hold in any non-trivial way of computations that are mentally represented. Or, to put it in blunter terms, you can't just give a syntax tree to a piece of performance data and say that I-language is this but in between the ears. The theory supposes that there is nothing problematic about how it transposes descriptions of E-language onto a model of I-language but all it has here is supposition, and if anyone were to actually develop an argument for the soundness of it, then they'd have the theory of the mind that everyone (apparently) wants.

    10. But if part of the theory is that performance gives some indication of competence (i.e., that acceptability is reliably related to grammaticality), then it would be a theory of mind, after all, wouldn't it? If what you are saying is that you don't believe that we understand the relationship between performance and competence, then you are free to say you don't believe the theory, but you can't say it isn't a theory of what it purports to be a theory of.

    11. If part of the theory is that performance gives an indication of competence, so that it can analyze performance and thereby be a theory of mind, then part of the theory amounts to the very claim that it is a theory of mind. Now, all I'm saying is that if this part of the theory is wrong, then the theory as a whole is wrong that it is a theory of mind.

      The way in which you might read this as being just the same as a disbelief in the entire theory is if you imagine that generative theory is so completely dependent upon being a theory of mind via its understanding of the competence/performance relationship that we couldn't possibly get rid of all that and still have something recognizable. This is quite reasonable, given the way that generative theory is often portrayed in order to stake its claim as a cognitive science, but I think it's too drastic precisely because I think that claim is mostly bluster. Get rid of all its suppositions about the competence/performance relationship, I say, and we're still left with a damn good theory of something, it's just not a theory of mind.

    12. I think the problem here is that we're all treating a gradable issue as a categorical one. The issue is not whether generative grammar is a theory of mind, but how many boxes one has to check to be an insightful theory of mind. Omer is going with the minimal criterion --- it describes a mental capacity, so it's cognitive science and a theory of mind. Callum directly argues against that: even if it studies an aspect of human cognition, its methodology might make it impossible to bring any of what we've learned to bear on cognition. And William points out that even if one grants that such a translation step is possible in principle, that still falls short of what large parts of the cog-sci community considers the bare minimum for an insightful theory of mind because there's so little actual work on taking this translation step. From that perspective, generative grammar is all build-up with no pay-off.

      All of this kind of reminds me of discussions I have had with undergrads when teaching a syntax class. Of course the class starts out with the cognitive angle --- what are the rules, how are they computed, how are they acquired. And that gets them fired up. But then comes the long slog of fine-grained analysis, from passives to wh-questions, raising, and ECM. When a student asks about acquisition, I have to tell them that only a few learning algorithms have ever been built on top of transformational grammar, many don't work (e.g. trigger learner), and where we have had successes (e.g. Charles Yang's work), they do not hinge on issues such as whether ECM involves actual movement. Same for processing, same for computation: very little builds directly on generative grammar, and the work that does rarely hinges on the details we discussed in the course. Then the students are understandably miffed; the whole class was basically a bait-and-switch. Technically they still learned a lot about an important aspect of human cognition, but it's far from obvious that what they learned got them much closer to what they were promised, or that other routes wouldn't have been more productive. I usually use that as an opportunity to get them into my computational courses...

    13. I agree that one always needs a linking function between the stuff between the ears (grammar) and the behavior you can measure (acceptability judgments). The linking function used by theoretical linguists is quite naive: an acceptability difference between two sentences that are matched in all other respects reflects a grammaticality difference (except if we don't like the result, as in the missing VP effect in doubly center embedded sentences, in which case we hope that one day the psycholinguists will figure it out). You can certainly take issue with this particular linking function, but I don't think it's true that linguists don't (implicitly) assume any linking function at all.

    14. I can just about accept with Roland Barthes that the author is dead, so that a linguist might propose a theory of X that other linguists declare not to be a theory of X at all (but a theory of Y), but when you say "the theory as a whole is wrong that it is a theory of mind" I feel you're going full-tilt Derrida on me. It seems to me that the mentalist commitments of GG have a huge influence on the way the theory is developed. You can see the mentalist commitments poke through to the surface all the time, for example when somebody questions whether Principle C is part of binding theory by pointing out that Principle C-like effects occur not only intersententially but also inter-speaker (A: John just walked in. B: Yes, and he/#John has already gotten into a fight). An implicit premise of the argument is that we can't compute c-command from the sentence of one speaker to the sentence of another, which seems as though it doesn't need to be stated because of mentalist commitments.

    15. Haha! I don't mean to be throwing postmodern nonsense around - Thomas's reply just above is a very sensible way of tempering some of what I've said. But is your point about Principle C really a point about the intrinsic mentalism of GG? What I think you've pointed out is that people who work in GG and who are committed to having a plausible mental theory will happily throw a putative principle of GG under the bus if it isn't plausibly mental. It's the theorists rather than the theory that have mentalist commitments and all I'm saying is that if they follow those commitments as far as possible, they might end up at the realization that they're not describing mental representations when giving structural analyses of performance data.

  4. I'll have to read through all of this interesting discussion in more depth later. I just want to respond to Omer's first point: I was not saying that generative linguists should *switch* interests. I was saying that there is an intersection between the things that are interesting/important from the philosophy of generative grammar (models of linguistic performance based on competence) AND things that are interesting to cognitive (neuro)scientists. If you are going to choose among interesting/important things, why not choose things that are particularly interesting/important and open up well-paid collaborations? As a side note, my comments are largely directed towards students who might very well not have figured out what *is* interesting and important to them.
