Wednesday, February 4, 2015

Dualism

I’ve lifted this from the comments section of this post, as I found the question raised interesting and it got me thinking. The comment, not surprisingly, comes from Mark Johnson. It asks, in effect, what the consequences of Chomsky’s observations concerning classical dualism and its demise are for studying how brains incarnate minds. In his own words:

I'm not sure I'm correctly understanding C's point about Newton destroying Cartesian dualism by destroying a purely contact-based mechanistic world; we all now believe in non-contact based forces like gravity. Is the point that we should stop worrying about how mental phenomena are instantiated in the brain, and just accept them as mysterious primitives?

This got me thinking: say Chomsky’s history is right (and I think he makes a good case for it being so), what’s the take-home message for us? Here’s what I replied:

You ask a good question. Here's how I understand things. The most important consequence of the dissolution of the classical mind/body problem is what it says about how unification is to proceed. In the classic version, we KNEW what physical forces HAD to be: contact mechanics. Thus, unification meant reduction to physics, since we knew the limits of physical explanation. Once we give this up, we don't actually know the "limits" of the physical; after all, if God can link gravity, an occult property, to matter, then she can link anything to matter. Thus unification can go either in the direction of reduction or in a more roundabout way, by changing the "reducing" science and leaving the "reduced" one more or less as is. This is exemplified by Chomsky's discussion of chemistry and physics. The classical mind/body conception privileged physics and required that every other science march to its tune. With its demise there is no preferred direction of unification: either the physics changes or the chemistry or both.

So, now there is what has been called Broca's Question in language: how do brains realize minds? But with the demise of the classical mind/body conception there is no reason to believe that the neurologists have it right and the cognitivists have it wrong when the two clash. The way I see it, what C is saying is that after Newton, the scientific playing field has been conceptually leveled.

This also has a practical implication: it is critical to develop "bodies of doctrine" within a discipline as well as figuring out how to unify across disciplines. And these bodies of doctrine must be taken seriously. So, if you want to explain how brains do language, then you need to explain how brains do the kinds of things that linguists have discovered over the last 60 years. This stuff has an integrity and counts intellectually even if there is no Nobel Prize in cognition. In reading much of the neuro literature there is this sense that the important big-shot science is neuro, and that mental matters must just toe the line when the two appear to be in conflict. This is a residue of the dualism that Newton exploded. C's point is that this posturing has no scientific merit anymore, as we just don't know the limits of the physical/neurological.

That's how I see it.

I want to invite comments on this for I believe that it is important. Let me say how.

An important theme in Chomsky’s philo/methodological writing has been that though classical dualism has been discredited and nobody alive today wants to admit to being a dualist, a kind of methodological dualism is currently alive and well and very widespread. Even more oddly, its chief practitioners are exactly those who go around telling us confidently (and often and loudly and in a self-congratulatory tone) “There are no ghosts!” And if you don’t believe that minds are just brains at some other level of description (e.g. “You know, you can cut the mind with a knife!”), then you are a religious nut who also likely sticks pins into doll effigies of your enemies.

I confess that when so assailed I am generally inclined to ask what’s so wrong with dualism? Or more correctly, what evidence is there that minds are reducible to brains? I even once asked this of Vernon Mountcastle (a big-shot neuro person whom Chomsky mentions in his piece) and he replied, roughly, that there are no ghosts or souls and so that must be the way it is. He conceded that he had no idea how that could be, but it was clear to him that it was this way, as any serious person would see, because were it not so we’d all have to believe in ghosts. In short, the untenability of dualism is taken as an obvious corollary of the non-existence of ghosts/souls. Who knew? Say that we never managed to unify chemistry with (a much revised) physics; would that have implied ghosts as well? Say that we never manage to unify quantum mechanics and relativity; what would that imply other than that we don’t know how to put them together? But for some reason, our failure to unify mind with brain is taken as a singular failure with cosmic sociological and spiritual ramifications.[1] And this is a manifestation of a contemporary dualist mind-set. How does it get expressed?

It shows up in various guises: the belief that cognitive/linguistic findings must dance to biology’s/neuroscience’s/CS’s tune, that behavioral studies are fine but brain measures are where truth lies, that findings in linguistics are suggestive but those in neuroscience are dispositive, the belief that if there is a conflict between linguistics and evolutionary biology then the lack of reconciliation is a problem for the former but not the latter, etc. Chomsky has rejected all of these sureties, and, IMO, rightly so. What Mark’s comment made me see more clearly than I had until now is that these convictions are actually the distorted reflections of a discredited dualism and so are not only illegitimate but are actually the modern incarnations of the positions they claim to be dismissing. Here’s what I mean.

Classical dualism, as Chomsky noted, had a criterion of physical intelligibility. If the explanation was not “mechanical” then it was either occult (i.e. a vestige of the old discredited Aristotelianism) or trivial (e.g. sleeping pills induce sleep because they have dormitive powers). Descartes accepted this view and suggested that in light of it there had to be a second substance, res cogitans, in addition to res extensa (i.e. matter), and that these two distinct substances could not be unified (though they did mysteriously come to interact in the pineal gland!). Note that this conception allows for the possibility of two distinct sciences, a science of the physical and one of the mental, both potentially quite interesting though fundamentally incapable of unification. What Chomsky points out is that with Newton’s undermining of the mechanical world view, we lost the criterion of intelligibility that it underwrote (i.e. with Newton’s physics, based as it was on forces inherent in matter that could act at a distance, the criterion of intelligibility was discredited). This had two consequences: first, ‘material’ came to mean ‘whatever the physical sciences study’ (i.e. there was no intelligibility criterion imposed on physical possibilities); second, the idea that minds and matter must be different no longer held (see Chomsky’s quote of Locke, wherein Locke says that if God can put gravitational attraction together with matter then s/he can also pack in mental properties if s/he so desires).

In fact, with the demise of the “mechanistic” criterion of physicality, there is no a priori limit on what counts as a physical property, and this is important. Why? Because this means that there is no fixed point in unification. What I mean by this is that in unifying A and B there is no a priori reason for privileging the predicates of one over those of the other. Another way of saying this is that there is no longer any reason to think that unification will proceed via reduction, reduction being where the reducing theory is held to be epistemologically privileged and the predicates/laws of the reduced theory required to accommodate to those of the reducing one. Moreover, not only was this a conceptual possibility given the demise of classical dualism, but, as Chomsky noted, this is what in fact took place with chemistry and classical physics (the reducing theory changed from classical to quantum mechanics), and, as Gallistel noted, it is what occurred with classical genetics and biochemistry (i.e. the former stayed more or less the same and the latter had to be radically rethought). This, I believe, is an important methodological lesson of Chomsky’s exposition.

Importantly, this moral has been lost on our modern dualists. Or, more accurately, many contemporary scientists have misunderstood what the demise of dualism amounted to and have resurrected a methodological version of it as a result. Those in the “harder” sciences think that their theories do enjoy epistemological privileges. The demand, for example, that linguistics must respect current theories of brain architecture (e.g. connectionist) to be legit (rather than drawing the conclusion that linguistic insights call into question these theories of brain architecture) is an instance of this. Because these scientists understand unification in terms of reduction, they take the to-be-reduced theory/domain to be conceptually less well grounded (more epistemologically suspect) than the reducing one. And though this made sense under the classical dualist conception, it no longer makes sense now, precisely because we have abandoned that which lent “materialism” its privileged place (viz. the doctrine of physical intelligibility).

There is a moral in all of this. Two in fact.

First, that the body of doctrine that we have developed in linguistics over the last 60 years requires explanation. The fact that it’s an interesting question how brains incarnate FL does not mean that putative facts about the brain discredit the facts that we have unearthed. Indeed, we should insist loudly and on every relevant occasion that a theory of the brain will be a good/complete one just in case it finds a way to incorporate the facts that linguists have uncovered. If these don’t fit into our currently favored neuro theory of the brain, then this is a problem for the brain sciences at least as much as it is a problem for linguistics.

Second, Chomsky has identified the source of the bullying; it comes from misunderstanding the history and import of the mind/body problem. There is an interesting question that all agree would be great to crack (i.e. Broca’s Problem: how minds live on brains, or how to unify thought and neural structure), but this agreement does not privilege either term of the unification relation. And anyone who thinks otherwise is a closet dualist. Say so the next time you are bullied, and stand back and watch the fun.



[1] I should add that I don’t believe in ghosts, but I don’t see this as meaning much. It seems to me entirely possible that we will never really know how minds live on brains. It’s certainly the case that most of the hard problems in psychology (e.g. consciousness, free will, all-things-considered judgments) are as ill understood today as they were when Descartes first took them as evidence for a distinct mental substance. Indeed, Mountcastle agreed and said so in his lecture. So his anti-dualism is an expression of faith, rather than a conclusion of argument or inquiry. I have nothing against such ambitious goals. But ambitious goals should not be confused with established conclusions. Wishing does not make it so.

Comments:

  1. I think I agree with you in a broad way but I was a little confused by how you framed the issue. Let me 'reduce' my differences by looking at one particular description you gave:

    "Another way of saying this is that there is no longer any reason to think that unification will proceed via reduction, reduction being where the reducing theory is held to be epistemologically privileged and the predicates/laws of the reduced theory required to accommodate to those of the reducing one."

    It looks to me like this slightly confuses notions of principle and practice. First, we have to establish the principle of whether or not scientific theories at different levels of description should be capable of being unified, and, second, if we agree to that, we have to establish a practice of how to go about such unification.

    If we deny the possibility of unification - some level of description is fully emergent and cannot be explained in terms of lower levels of naturalistic phenomena - then we're already done. No need to consider reductionism of any sort.

    But if we pursue unification, where my conception seems to diverge from yours is that I see unification as a *necessarily* reductive enterprise BUT its methodology has nothing intrinsic to it which says that one level of description has any epistemological privilege over another.

    So, of course, when we try to unify theories, we have to tweak them at all levels in ways that seem sensible to try to get them to comport with each other. But when you cite examples such as neuro people trying to undermine findings in linguistics, it doesn't look to me like they have a deep philosophical misunderstanding about the process of unification; it looks like they have a lack of imagination and far too much confidence in their own theories. This just means that they've got some aspect of the practice wrong, rather than the principle, which is what you seem to be going for. And I think that's somewhat implicit in some of your other comments, which seem to want linguistics to be, in principle, explicable at a neurological level. This is reduction even if our understanding of neurology has to significantly change.

    To say that their unification *doesn't* have to be reductive seems to me to amount to saying that neurology may one day have to 'look' more like linguistics, which doesn't make sense. That's a little vague though, so let me put it like this: as far as I'm aware, it's accepted that, if unification is possible, then it's based on the idea that emergence is one-directional: chemistry emerges out of physics; physics does not emerge out of chemistry. Unification is therefore reductive because there is an arrow of causation to follow. To say otherwise is to say that emergence is multi-directional which I don't think is true (neurology does not emerge from linguistics). So to want to be reductive is, in my view, to be on the right track in terms of the principle of unification, and if people want to claim that because they're working at some lower level of description than other researchers their theories therefore have precedence, that just means they're arrogant rather than that they're dualists.

    1. Thx for the comment. I think that we sort of agree. So, I do not think that we will ever find neurology reduced to linguistics. FoL supervenes on brain organization of some sort, not the other way around. However, I think that reduction has come with a kind of bias, the assumption being that the reducing science is in some sense more epistemologically privileged. If I understand what you are saying, it is that the reducing science is metaphysically more privileged (no brain states, then no FoL) but that it is not methodologically so. I think I agree, with the caveat that we are not generally in the know about what the reducing science looks like until the reduction is done. So, using Chomsky's example, chemistry did not reduce to 19th century physics; it reduced to 20th century quantum mechanics, whose properties were entirely different from those of its 19th century predecessor. Indeed, one might say that the latter theory had to absorb some of the concepts of 19th century chemistry (e.g. it had to make room for valence in the conception of discrete quantum levels with restrictions on how many electrons could live on each level). So, in a sense, latter-day physics accommodated earlier chemistry by changing in the direction that the latter required. Or to put this more baldly, the physics accommodated to the chemistry, and not vice versa. That said, valences then became the derived concept, understood in terms of quantum levels and exclusion principles.
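
      To make that last bit concrete, here is a toy sketch (mine, not Chomsky's, and accurate only for roughly the first 18 elements, where naive shell-filling holds): the exclusion principle caps level n at 2n^2 electrons, and the leftovers in the outermost level are more or less the valence electrons chemists had been counting all along. In Python, with a hypothetical helper name:

        # Naive shell-filling: level n holds at most 2*n^2 electrons
        # (Pauli exclusion). A simplification that only works for Z <= 18,
        # but enough to show valence falling out of quantum levels.
        def outer_electrons(z):
            n = 1
            while True:
                cap = 2 * n * n          # capacity of level n
                if z <= cap:
                    return z             # electrons left in the outermost level
                z -= cap
                n += 1

        print(outer_electrons(11))       # sodium: 2, 8, 1 -> 1 outer electron
        print(outer_electrons(17))       # chlorine: 2, 8, 7 -> 7 outer electrons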

      This, I believe, is where we agree. What I was trying to suggest that Chomsky added to this is a bit of psycho-history. He observes that in current discussions those theorizing at the more metaphysically basic level believe that they also have certain epistemological privileges when it comes to unification. You call this "a lack of imagination and far too much confidence in their own theories." I agree. But I believe that Chomsky is suggesting (and here I think I agree) that this overconfidence is the vestige of an earlier, understandable position. In the 17th century there was a criterion of physical intelligibility, widely shared among leading scientists, that were it true would indeed privilege the metaphysically basic science. What Chomsky is suggesting is that even after this privilege was upended by Newton, the epistemic privilege was retained. This was unjustified given what Newton had done, but it's what happened. So the source for the lack of imagination and overconfidence lies in this earlier misunderstanding of what happened when dualism was "defeated." What many took away was that physical concepts and theories (specifiable in some non-circular or trivial sense) always trump mentalistic ones methodologically, as the latter are in some sense suspect until explicated in physical terms. But this is wrong: when there are problems of unification, all concepts are equally suspect in the sense of being liable to revision.

      So, I think we more or less see things the same way. I find Chomsky's psycho reconstruction plausible. We both agree that whether it is or not, the idea that when two theories are unified one has epistemological privilege over the other is wrong-headed. Sadly it is also rampant.

    2. It does look like we agree! Chomsky's perspective is new to me and it took a little while to get my head around, but it's very interesting to think about. Thanks.

  2. Is it really the case that Chomsky believes that, when comparing more abstract theories with less abstract ones, neither enjoys epistemological privilege?

    I got to wondering about this when reading the little paper Norbert just posted. In each of the historical examples involving friction between more abstract theories and less abstract ones, it is *always* the case that the less abstract one bends to the more abstract one.

    Is it ever possible for there to be unification via pure reduction (that is, unification that leaves the less abstract theory unchanged)? I suspect that Chomsky thinks not, following his Hume discussion on page 7. We have our immediate impressions and everything at the end of the day has to bend to them, not the more provisional, less abstract theories.

    So is the epistemological parity that Norbert and Callum are talking about really the right way to go? I think I get a different sense.

  3. I'm not sure what you mean by more or less abstract. Is chemistry more or less abstract than physics? Both were very abstract. In fact, when chemistry got reduced, notions like valence etc. were retained more or less intact, and how valence "worked" was explained via the principles of quantum mechanics. Ditto with classical genetics and biochemistry around Watson and Crick.

    I think that the point that Chomsky wished to make was that ex ante there is no knowing which theory will have to change to accommodate the reduction. In the chemistry case it was physics. In the genetics case it was biochemistry. If Gallistel is right, if we are to unify cognition and neuroscience then the latter will need real revision. That's how I understand things.

    1. I think, Norbert, I agree with the essence of what Audrey is trying to communicate here.

      It is true, logically, there is no reason to expect one or the other to be more successful. However, throughout their writings Chomsky, and for that matter Marr, have suggested getting the higher (computational) level understanding first, and then trying to understand the lower (implementational) level in terms of that. [Chemistry can be thought of as the higher level theory, and Physics the lower level theory.]

      Both Chomsky and Marr (correctly, IMHO) prioritize the higher level understanding in their writings (Aspects, Vision). In fact, it seems to me, even the following statement that you made follows only if you think the higher level understanding is somehow privileged:

      >> So, if you want to explain how brains do language, then you need to explain how brains do the kinds of things that linguists have discovered over the last 60 years.

      If on the other hand there is no privileged status for the higher level theory, then it is unclear to me why neuroscientists should look at what linguists have said in the last 60 years to explain how the brain does language. They might as well not look at what linguists have found, based on the logically possible assumption (which some do make) that it is we linguists who have gotten the theory/descriptions wrong, and not them.

    2. Thx for the push back. I think that you and Audrey are pointing to something important; namely, that the "higher" level theory in the sense of Marr has an integrity and legitimacy independent of the lower level theories that it (hopefully) relates to. However, and I think that neither Chomsky nor Marr would disagree, this integrity does not confer epistemic privilege when issues of unification are at issue. In other words, in order to unify, it is legit to change both theories. So, say, as seems inconceivable at present, we find some way of distinguishing "tree" circuits from "set" circuits in the brain. And say that we find that the brain likes tree circuits but hates set circuits (remember, this is sci-fi). Then the price for unification might be abandoning Chomsky's current conception of Merge, precisely because we have neurological evidence that sets are artifacta non grata for brains. If this were to happen, then we would have a good argument against Merge being an operation that forms units like {a,b}. The higher level theory would have been changed to effect unification. Would this be a good argument? I think so, yes. Is the higher level theory "immune" from revision? No.
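
      (To make the sci-fi a bit more concrete: the difference at stake is just a difference of data structure. A minimal sketch, in Python, of the two options; the function names are mine, purely for illustration:)

        # Two toy renderings of Merge. set_merge builds the unordered
        # unit {a, b}; tree_merge builds an ordered pair instead. The
        # sci-fi neurology above would be evidence for the second.
        def set_merge(a, b):
            return frozenset([a, b])     # {a, b}: no left/right order

        def tree_merge(a, b):
            return (a, b)                # (a, b): order is encoded

        # set_merge cannot distinguish orders; tree_merge can:
        assert set_merge("the", "dog") == set_merge("dog", "the")
        assert tree_merge("the", "dog") != tree_merge("dog", "the")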

      That said, at any given time some theories are better founded than others. I personally believe that we know more about how the brain does cognition than about how it computes. Thus I tend to privilege the higher level theory in Marr's sense over the neuro stuff. However, this is a fact of investigative history, not of principle. The mechanical picture that undergirded mind/body dualism was licensed by a criterion of intelligibility. This gave the concepts based on that criterion a certain inviolability. With that abandoned, for reasons Chomsky articulated, there is no longer a principled way of conferring epistemic privilege. When things don't fit, all the parts are up for grabs. None has more principled staying power than the others.

      So why should neuroscientists listen to linguists? Because we have found things out about the mind and if you believe that minds are what brains secrete then we have also found out something about brains. They do certain sorts of computations etc. See Gallistel here. So they should listen because we have discovered some truths. Are they "perfect"? No. Are they worth knowing about? Sure. So this is not a matter of principle, but simply a matter of attending to the evidence.

      Hope this helps. thx again to you and Audrey.

    3. So my real question is somewhat different.

      Norbert says: In the chemistry case it was physics. In the genetics case it was biochemistry. If Gallistel is right, if we are to unify cognition and neuroscience then the latter will need real revision

      And that's how I see it too. But in each of those pairs there is a 'higher' and a 'lower' field in the Marrian sense. And in each pair it is the higher one that impinges on the lower one. I say 'impinges', but I want this and other words I use, like 'alter' and 'amend', to mean 'whatever we're calling what happened in the unification of physics and chemistry'.

      Now I also get that it is logically possible for the reverse to hold: the lower effectively demanding the alteration of the higher one, like in the sci-fi circuitry case.

      My real question is: why the recourse to a sci-fi example? Is there no actual example in the annals of science where the lower level theory informs the higher one in that way? I don't expect anyone to be a human encyclopedia in these matters (except maybe Chomsky), but therein lies my curiosity.

      If it's the case that, even though logically possible, there simply hasn't been a clear example of the lower level theory altering the higher one in the ways we've been talking about with physics/chemistry etc., then that's a very surprising fact, and one that demands a (potentially very cool) explanation.

      I can imagine totally trivial cases: our cognitive theories have to be constrained by the fact that there is no direct neuron-to-neuron communication from one person to another. But that seems to be of a different kind than the other cases. Why? Well, no cognitive theory would need to be amended in light of that fact; it's more like a low-bar entrance exam for any non-laughable cognitive theory.

    4. It seems to me that Norbert and Chomsky have pretty different views on this. Norbert seems to think that in the case of linguistics and neuroscience, linguistics has a sufficiently established body of work for the neuroscience of language to follow. But, otherwise, there is no methodological priority for either the higher-level or lower-level theory.

      On this note, it seems to me that Chomsky’s own lessons from history are quite different. In fact, Chomsky has emphasized right from the beginning that the higher-level theory is methodologically more important (1). And Marr makes a similar point (2). [But there is a small danger in conflating “higher-level” with “Computational”; at least sometimes, Marr uses “Computational” to mean something different from “higher-level”.] For both of them, it is not about whether the body of work at a particular level is sufficiently fleshed out, but really that there is a certain methodological priority of one over the other.

      Back in the 60’s, we surely knew very little about the cognition of language. In fact, none of the important findings that Norbert lists in other blog posts for syntax had even been discovered by then. Yet, Chomsky had the same methodological point of view back then. Funnily enough, as I see it, Norbert would have been radically at odds with the early Chomsky in this particular respect.

      And, to come back to Audrey’s question: as I read it, I don’t think Chomsky or Marr, at least, is aware of any case where the lower level has instructed the higher level (though it is logically possible). Otherwise, the quotes and their positions would not make any sense.


      Here are the relevant quotes:
      (1) “There seems to be little reason to question the traditional view that investigation of performance will proceed only so far as understanding of underlying competence permits. Furthermore, recent work on performance seems to give new support to this assumption. To my knowledge, the only concrete results that have been achieved and the only clear suggestions that have been put forth concerning the theory of performance, outside of phonetics, have come from studies of performance models that incorporate generative grammars of specific kinds - that is, from studies that have been based on assumptions about underlying competence.” (Chomsky, Aspects, pg. 10)

      (2) Although algorithms and mechanisms are empirically more accessible, it is the top level, the level of computational theory, which is critically important from an information-processing point of view. The reason for this is that the nature of the computations that underlie perception depends more upon the computational problems that have to be solved than upon the particular hardware in which their solutions are implemented. To phrase the matter another way, an algorithm is likely to be understood more readily by understanding the nature of the problem being solved than by examining the mechanism (and the hardware) in which it is embodied. (Marr, Vision, pg. 27)

    5. Again, it depends what one means by higher and lower. But there are cases in the history of science where the lower levels have had an impact on the higher level descriptions. For example, think of Aristotelian theories where motion included growth and locomotion. As we developed specific theories of motion we rejected this overall categorization and distinguished three different things, as the physical mechanisms were different.

      Other cases also come to mind. So the bean-bag theory of genetics is now understood to be a radical simplification, as how genes are actually laid out has an impact on how they can interact. Another is thermodynamics. When it reduced to statistical mechanics it became evident that thermodynamic effects were in principle reversible, just not very likely. This in fact caused a bit of a stir in the 19th century and was one of the things, apparently, that drove Boltzmann a bit batty. So I think that there are cases (and probably a lot of them) where understanding the underlying mechanism in the reducing theory leads to important changes in the reduced theory; it gets refined, some distinctions get put aside as "simplifications," etc. Of course, if the reduced theory has some oomph then it is unlikely that ALL of its distinctions and structure will disappear, as it is common in unification accounts to derive the results of the reduced account as limit properties (think classical and relativistic physics). Actually, come to think of it, reducing classical mechanics to quantum mechanics requires a rather different conception of causality.
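
      (The thermodynamics point can be put with numbers: on the statistical picture nothing forbids, say, all the molecules in a box drifting into the left half; for N independent molecules the probability is 2^-N, which is why no one ever sees it. A back-of-the-envelope sketch, mine rather than Boltzmann's:)

        # Chance that all N molecules sit in the left half of a box,
        # treating each as an independent coin flip. "Irreversibility"
        # is just this number being absurdly small at macroscopic N.
        from math import log10

        def log10_prob_all_left(n):
            return -n * log10(2)         # log10 of 2**-n

        print(log10_prob_all_left(10))   # ~ -3: rare but observable
        print(log10_prob_all_left(6e23)) # ~ -1.8e23: never, in practice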

      So there are cases where unification has importantly reinterpreted the details of the reduced theory. However, to repeat, if the reduced theory has any good structure (i.e. has marks of being true) then a successful reduction will preserve these in general so that the degree of change might not be that evident. The cases Chomsky and Gallistel discuss are cases where the reducing theory CANNOT unify with the reduced one. But more common, I suspect, is where unification is possible but the reduced theory must be somewhat reinterpreted, as should be permitted if its categories are not epistemologically sacrosanct.

    6. @Karthik: "Back in the 60’s, we surely knew very little about the cognition of language. In fact, none of the important findings that Norbert lists in other blog posts for syntax were even discovered by then. Yet, Chomsky had the same methodological point of view back then. Funnily enough, as I see it, Norbert would have been radically at odds with the early Chomsky in this particular aspect."

      Maybe I would have, but I doubt it. What Chomsky showed back then is more along the lines of the argument that Gallistel is making now: viz. that none of the extant theories of language could accommodate even the most simple features of it. There was nothing there to replace, or this is how I read Chomsky's review of Skinner: it was either trivial or obviously false.

      The case Chomsky is interested in, I believe, is where there are two bodies of doctrine, both well grounded theoretically and empirically, and the question is how to unify them. This seems to me a different question. I should also add that Chomsky's initial proposals about how language is organized WERE debatable. And they were debated. So skepticism then was not entirely misplaced.

      One last thing: I am not sure that I see the competence/performance distinction as pointing to the same issues of unification that Chomsky was discussing. But I may be myopic here.

    7. >>What Chomsky showed back then is more along the lines of the argument that Gallistel is making now:

      I don’t deny this, of course. All I was trying to point out was that the linguistic theories back then were far less developed, yet Chomsky proposed the same thing: that our understanding of performance systems will/should follow competence. *Not* because the then-current low-level theories couldn’t account for the data, but because that is the “traditionally” correct thing to do (“There seems to be little reason to question the traditional view that investigation of performance will proceed only so far as understanding of underlying competence permits”). There is no hint in the discussion that this was a situation pertinent to the study of language because of the failure of the then-current low-level theories. In fact, the discussion clearly suggests that he thinks that is the way to study things in general.


      >>I should also add that Chomsky's initial proposals about how language is organized WERE debatable.

      Of course, I agree with you on that. That's not really up for debate. But that is different from the methodological view Chomsky had. He was clear that the study of Performance should follow the path carved out by the study of Competence. And clearly, that would have led to a lot of dead-ends in our understanding of Performance systems, if we had used the 60's understanding of competence. So, I am not saying you are wrong in your opinion, but just pointing out that Chomsky's methodological views (at least in Aspects) seem to have been substantially different from yours on this particular issue.

    8. I will say one more thing and then give you the last word. The discussion has been enlightening, at least to me. There are many methodological points that Chomsky has made. One concerns the competence/performance distinction. Others relate to unification. I don't see them as identical. To study how X is being used, it's good to have a description of X. To know how X is embodied, it is good to have a description of X. However, which description of X is the right description may require knowing how it is put to use and how it is embodied. To date, our theories of performance and brain structure have been too weak to really constrain our competence theories (descriptions of FoL). This is not the case in vision or auditory perception, I believe. Nor need it forever be so for the domain of language. When that time comes, our competence theories may have to change to allow unification. My point is that when (if) that time comes, competence theories will enjoy no epistemological privilege wrt how unification is to proceed. The two accounts will be on an equal footing; the doctrines of each will need to be accommodated in the unification and neither "side" will hold the whip hand. I think that Chomsky thinks this as well, but if he doesn't, I do (but I think he does). Right now, we know a lot more about the structure of Gs than we know about parsing, production, real-time acquisition or how brains compute. So right now, the competence theories are more reliable IMO and hold the upper hand argumentatively. I hope that this changes one day.

      Last point: that every performance theory requires a competence account does not imply that a given competence account is epistemically immune to revision. The distinction is IMO irrefutable. Specific versions thereof are not.

      Thx again and I hand you the last word.

    9. I myself am most sympathetic to your point of view. What I have had to struggle with (like everyone else) is recognising that there is an amazing amount of subjectivity when it comes to declaring a certain level of description as (even partially) successful. What counts as a successful description or explanation is something that people don’t seem to agree upon.

      Perhaps, more so than anything else, it is this lack of consensus that drives researchers who don’t necessarily see things the way many generative linguists do to be unwilling to look to modern linguistic results for help/inspiration. Clearly, people like Alex C. (and perhaps many others) don’t share our sense of success in the last 60 years of linguistic description/theorising. If this is indeed true, then asking neurolinguists or others who work on low-level theories to look to generative results is not a winning strategy, rhetorically speaking. In the end, generative linguists will have to enter the discussion of low-level theories with specific proposals and show (if possible) that there is more success with the said strategy. [This is why I find Marr so inspiring. He might have been wrong in the details. But he really thought and worked at multiple levels and showed how it could be useful to do so, as a proof of concept.]

      Btw, thanks for the last word.

  4. Thanks Norbert for your very reasonable comments. Yes, I think you've explained what Chomsky is saying, and why.

  5. I think you're missing the point of generative grammar here, which is that on the basis of experience with their caregivers and others (other children are very important in some societies, perhaps more so than adults), children pick up something that is enough like a 'grammar' to merit being called one. Indeed, traditional grammars are codifications of what elite kids picked up from their caregivers (perhaps decades or centuries ago), combined with a certain amount of stuff imported from other languages, such as the rule about not ending a sentence with a preposition in English.

  6. Avery Andrews, I suspect that the comment you are replying to is disingenuous, a trick to generate traffic to the site linked to at the beginning of the post. Note also that the rest of the discussion is four years old.

    1. This is very likely the case, although I tend to think it is better to assume good faith in marginal cases, especially if there are no really interesting discussions going on atm.

    2. I believe that there are even bots which will automatically paste in copies of comments or parts of comments from other blog discussions, guided by keywords for relevance, in order to generate traffic for a site (thereby boosting ad revenue). I generally ignore any comment that starts with a link like that one did.

    3. Maybe it would be good if this site had a moderator who would simply remove stuff that they judged to be botty, hopefully leaving the 4 most recent sensible human remarks for passers-by to see.
