
Thursday, March 17, 2016

Crows

Eric Raimy sent me this link to a piece on the neuroscience of crows (here). The piece argues that cog-neuro should add corvids to the list of "model organisms" (a distinguished group: zebrafish larvae, C. elegans worms, fruit flies and mice). Why? Well, that's why I am linking to this piece. The reasoning is interesting for linguists. Let me indulge myself a bit.

There is a common assumption within linguistics that more is better. In particular, the more languages we study, the better it is for linguistics. The assumption is that the best way to pursue whatever it is linguists should be studying is by looking at more and more languages. Why do we think this? I am not sure. Here are two possible reasons.

First, linguistics is the study of language, and thus the more languages we study, the further we advance this study. There are indeed some linguists who so conceive of the enterprise. I am not one of them. I am of the opinion that for modern GG the object of study is not languages but the faculty of language (FL), and if this is what you aim to understand, then the idea that we should study more and more languages because each language studied advances our insight into the structure of FL needs some argument. It may, of course, be correct, but it needs an argument.

One possible argument is that unless we study a wide variety of languages we will not be able to discern how much languages vary (the right parameters) and so will mistake the structure of the invariances. So, if you want to get the invariant properties of FL right, you need to control for the variation, and this can only be done by wide cross-linguistic investigation. Ergo, we must study lots of languages.

I am on record as being skeptical that this is so. IMO, what we have found over the last 30 years (if not longer) is that languages do not vary that much. The generalizations that were discovered mainly on the basis of a few languages seem to have held up pretty well over time. So, I personally find this particular reason on the weak side. Moreover, the correct calculation is not whether cross-linguistic study is ever useful. Of course it is. Rather, the question is whether it is a preferred way of proceeding. It is very labor intensive and quite hard. So we need to know how big the payoffs are. So, though there need be nothing wrong with this kind of inquiry, the presupposition that this is the right way to proceed and that every linguist ought to be grounded in work on some interesting (i.e. not English) language goes way beyond this anodyne prescription.

Note that the author of the Nautilus piece provides arguments for each of the model animals. Zebrafish larvae and C. elegans are there because it is easy to look into their brains. Fruit flies and mice have "easily tweakable genes." So, wrt the project of understanding neural mechanisms, these are good animals to study. Note the assumption: the mechanisms are largely the same across animals, and so we choose the ones to study on purely pragmatic grounds. Why add the corvid? Precisely because it raises an interesting question about what the neocortex adds to higher cognition. It seems that corvids are very smart but have none. Hence they are interesting.

The linguistic analogue of this sort of reasoning should be obvious. We should study language X because it makes, say, binding easy to study, as it marks in overt morphological form the underlying categories that we are interested in. Or, we should study language X because it shows the same profiles as language Y but, say, without overt movement, hence suggesting that we need to refine our understanding of movement. There are good pragmatic reasons for studying a heretofore un(der)studied language. But note, these are pragmatic considerations, not principled ones.

Second, that's what linguists are trained to do, so that's what we should do. This is, I am sure we can all agree, a terrible argument. We should not be (like psychologists) a field that defines itself by the tools that it exploits. Technology is good when it embodies our leading insights. Otherwise it is only justifiable on pragmatic grounds. Linguistics is not the study of things using field methods. It is the study of FL and field methods are useful tools in advancing this study. Period.

I should add that I believe that there are good pragmatic reasons for looking at lots of languages. It is indeed true that at times a language makes manifest on the surface pieces of underlying structure that are hard to discern in English (or German or French, to name just two obvious dialects of English). However, my point here is not to dismiss cross-linguistic work, but to argue against the assumption that this is obviously a good thing to do. Not only is this far from evident, IMO, but it is also far from clear to me that the intensive study of a single language is less informative than the extensive study of many.

Just to meet expectations, let me add that I think that POS considerations, which are based on the intensive study of single Gs, are a much underused tool of investigation. Moreover, results based on POS reasoning are considered far more suspect than are those based on cross-linguistic investigation. My belief is that this has things exactly backwards. However, I have made this point before, so I will not belabor it now.

Let me return to the linked paper and add one more point. The last paragraph is where we find the argument for adding corvids to our list of model animals. (Btw, if corvids are that interesting and smart, there arises a moral issue of whether we should be making them the subjects of torturous neuro experiments. I am not sure that we have a right to so treat them.)

If, as Nieder told me, “the codes in the avian NCL and the mammalian PFC are the same, it suggests that there is one best neuronal solution to a common functional problem”—be it counting or abstract reasoning. What’s fascinating is that these common computations come from such different machinery. One explanation for this evolutionary convergence could be that—beyond some basic requirements in processing—the manner in which neurons are connected does not make much difference: Perhaps different wiring in the NCL and PFC still somehow leads to the same neural dynamics.

The next step in corvid neuroscience would be to uncover exactly how neurons arrive at solutions to computational challenges. Finding out how common solutions come from different hardware may very well be the key to understanding how neurons, in any organism, give rise to intelligence.

So, what makes corvids interesting is that they suggest that the neural code is somewhat independent of neural architecture. This kind of functionalism was something that Hilary Putnam was one of the first to emphasize. Moreover, as Eric noted in his e-mail to me, it is also the kind of thing that might shed light on some Gallistel-like considerations (the kind of information carried is independent of the kind of nets we have, which would make sense if the information is not carried in the net architecture).

To end: corvids are neat! Corvid brains might be neurally important. The first is important for YouTube, the second for neuroscience. So too in linguistics.

8 comments:

  1. In this context, also see this upcoming paper in TCS with a somewhat different take: "These findings offer a sobering lesson: there seem to be only limited degrees of freedom in generating neural structures that support complex cognition." (p. 10, doi: 10.1016/j.tics.2016.02.001)

  2. This comment has been removed by the author.

  3. I think this misses one of the central functions that work on languages-other-than-the-usual-suspects has served. It's not just adding to our inventory of discoveries (though it does that too); it's also weeding out some alleged "discoveries" that seemed right when looking at the initial sample of languages, but turned out to be wrong once more languages were looked at. (You write, "The generalizations that were discovered mainly on the basis of a few languages seem to have held up pretty well over time." I would say that sentence is missing "a good number, though not all, of [...]" at the beginning of it.)

    Some notables:

    - the Case Filter (at least understood as having anything to do with the notion of case from which it gets its name)
    - the Activity Condition
    - Feature Inheritance
    - uninterpretability (i.e., unvalued features being crash-inducing) – you knew I couldn't resist including this

    It's interesting to juxtapose this with the Poverty-of-Stimulus considerations you mention. To take one example: I'm betting that at least some of the effects the GB-era Case Filter was supposed to capture in English are not learnable from Primary Linguistic Data. If the Case Filter is not real – or, equivalently in my view, if the Case Filter deals with assigners and assignees in a manner that's entirely divorcible from the surface structure of sentences; in particular, assigners and assignees can be matched in a way that fully ignores conflicting information from the morphological signal – then we're back to square one w.r.t. the Poverty of Stimulus. That is, the child can't possibly figure out from the signal that English infinitives cannot assign abstract nominative (since the notion of "nominative" relevant to the classical Case Filter is not about the signal, and regardless of that morphological nominative is available in infinitives crosslinguistically). Now, the fact is we English speakers all end up in a knowledge state where "*John tried Bill to win" is ungrammatical. But it seems to me that the explanation we thought we had for how we end up in that knowledge state just doesn't work. And one needs to look at languages like Hungarian, Icelandic, and beyond, to see this.

    That, of course, just reaffirms that we have a puzzle on our hands. But it's better to know that than to be under the mistaken impression that the puzzle was solved.

    Replies
    1. Agreeing with Omer here. Also: the idea, popular from the birth of the MP at least until the submission of Baker's 2008 book on agreement, that agreement (including NP-internal concord and similar phenomena) could be explained in terms of local relationships. This idea was obviously hopeless on the basis of substantial literature from 1995 (Nick Evans' thesis/book on Kayardild, the papers in Plank's _Double Case_ volume, plus Terry Klokeid's thesis on Lardil), but as late as 2008 people were still trying to make it work, by trying to come up with ways of stuffing all the agreeing elements through the appropriate functional projection.

      & for a hypothetical case, imagine if generative grammar had started in the Southeast Asian/Western Austronesian area, where it is possible to argue that the major languages have no morphosyntactic features. There would then be no really solid, easily understandable evidence for multiattachment as part of the implementation of Raising, until people eventually got around to poking into obscure and far-off languages such as Icelandic and Ancient Greek. Which would very likely be roughly now, if the discipline in our alternate timeline developed about as fast as it has in the real one. Or never, if the "don't look at lots of languages" philosophy were to prevail.

    2. I don't want to get into a long debate about this because I think that you guys are basically right. We have learned a lot by looking at other languages. What I was questioning is the assumption that this kind of investigation is "obviously a good thing." What I meant by this is that it needs no independent justification, but is taken as obviously worthwhile a priori. I suspect that this is the default assumption because linguistics is taken (and linguists take themselves) to be the study of languages. That's what linguists do (and this is why, when you tell someone that you are a linguist, the first question they ask is how many languages you speak).

      As you might guess, I don't think that this is what I do or what many other GGers do or what the Chomsky take on linguistics is all about. We study a faculty. The products of that faculty are interesting to the degree that we think that they will shed light on its structure. So there is an excellent PRAGMATIC reason for studying different languages, for there is often an argument to be made that they will reveal some facet of the faculty that heretofore studied languages have obscured or misdescribed. Fine. That's the reason the linked-to article made for studying corvids and taking them as another model animal. I am suggesting that we take the same attitude.

      Now will that make a practical difference? Maybe. For it would focus questions on what the new study brings to the basic concerns. In other words it will make the following question reasonable: why should we study language X? Describing another G is probably fun. But we should want more than that. Asking that people justify their choice of language to study might help focus these concerns. If it does, the skepticism I am expressing will have served its purpose.

      So, sure, studying different languages has been very useful. And there is no reason not to study typologically different kinds. But, the default assumption that this is the WAY to do linguistics, well there I am not willing to go. I want pragmatic justification. Thx to Omer and Avery for providing some.

    3. I contend that it needs no more justification than the rather banal observation that, since there is more than one language, we need to look at a 'reasonable sample' (which, given the traditional 'languistics' figure of 6000, rubbery and inadequate as it is, is going to be a fair few) in order to have adequately founded ideas of what is in the 'faculty'.

      Sentiment borne out by the work on Greenberg's Universal 20, first formulated long before there were any theoretical ideas that it could have been used to support, but now capable of delivering some rather interesting observations, such as that the nominal head can appear unexpectedly far towards the front of the NP, but not towards the back.

      What is true of course is that it is not the only necessary activity in linguistics (eg it isn't 'the WAY to do linguistics'), but it is a necessary part of the whole activity. The idea that generative grammar can be done on the basis of the study of a single language has happily been widely ignored since the beginning of the field (Postal doing Mohawk, Matthews doing Hidatsa, what else that I'm not remembering), but it is a major source of wrong-headed criticism and objections, and things would be better if this idea had never gotten any kind of rhetorical support at all.

      & Asking people to justify their choice of language to study in a serious way is and always has been a grave error, but perhaps for a shifting balance of reasons over time: 40-50 years ago, it was impossible to guess what interesting thing might turn up before taking a look, while in the much more developed landscape today, there will almost certainly be something. By 'in a serious way' I mean expecting an authentic, genuinely thought-out answer, as opposed to the spurious rubbish that people are trained to spit out in the modern world, and have to get reasonably good at to survive in it. The real answer is, I think, in most cases, that for some reason or other you bumped into it and something stuck.

  4. I agree (once again) with Norbert. It's interesting to think about the use of terms like "understudied languages" in relation to these concerns.

  5. Norbert is of course right that generative linguistics is about trying to understand the language faculty, for which all sorts of evidence is relevant. I think the question being got at here is whether the study of one language is sufficient to get to the best understanding of this faculty that we can achieve. I'm broadly with Omer/Avery here in thinking that it's not sufficient, which means that to get a better understanding of the language faculty we should bring in further evidence. Part of the language faculty is the interfaces with SM (what Chomsky is now calling externalisation), and different languages just do different things at SM. What they do at SM can give us evidence for what happens inside the computation itself (the nature of dependency formation). Beyond the interfaces, I think that studying one language may also logically lead to a loss of information about the computational system. Imagine the one language we had access to study were a language where everything stays in situ, with overt operators in scope positions, so it's a bit like predicate logic. We'd have no evidence for Internal Merge, and then we might have thought that Merge itself couldn't be the right model, because Merge predicts internal applications of itself. We'd then have to go with a model where we have an extrinsically restricted version of Merge plus some extrinsically restricted scope-calculation mechanism. Which would be a different and less elegant view than what we currently have.

    There are also more than just the practical issues alluded to so far. A lot of progress in science is made by people finding a problem that they themselves find interesting, working like crazy on it, and coming up with a new way of thinking that impacts on the whole theory. So if particular linguists find themselves puzzled by some pattern they've discovered in reconstruction of binding effects in Walmatjari noun phrases, I'd certainly want them to feel that that was a legitimate thing to work on. Even if they could have got to the same final conclusions by studying English noun phrases. The modal could here is crucial: they could have got there, but, given their interest is in Walmatjari, they may never have done. And we may have waited a long time, perhaps forever, for someone else to have done. I think this circumstance is pretty common actually. So individual interest is an important driver of science, and our field should, I think, be supportive of people doing work on what they are interested in; even if it is not logically necessary, it may be practically necessary, given the basic limitations of human beings. Science is done by humans, and one of the most pernicious effects of the corporatization of universities is the idea that the direction of science can somehow be antecedently computed and funding can be directed in the direction of the results of those computations. That means not hiring clever people to work on things they find interesting, but hiring them to do things you want them to do. A strategy for long-term stultification, I think.
