Wednesday, March 20, 2013

Guest Post: Jeffrey Watumull on Postal's Critique of Biolinguistics

Jeff Watumull sent me an early version of the following and I immediately asked him if I could post it. It is a vigorous rebuttal of Postal's argument outlined in several of his more recent papers.  An earlier post on a similar topic generated a lot of interest and discussion. Jeff here argues (convincingly in my view) that Platonism and the biolinguistic program are perfectly compatible. If correct, and I will let you dear readers judge for yourselves, there is even less to Postal's Platonist critique of the biolinguistic program than I earlier conceded. Jeff argues that Postal's critique is a non sequitur even on its own Platonist assumptions. The good news: if you find Platonism appealing you can still be a good biolinguist.  Whew! Thanks Jeff. Enjoy the piece. Oh yes, the post is on the long side. That's the price you pay for a comprehensive critique.




Biolinguistics and Platonism: Contradictory or Consilient?

Jeffrey Watumull
(watumull@mit.edu)

1 Introduction

In “The Incoherence of Chomsky’s ‘Biolinguistic’ Ontology” (Postal 2009), Postal attacks biolinguistics as “junk linguistics” (Postal 2009: 121) with an “awful” (Postal 2009: 114) ontology expounded in “gibberish” (Postal 2009: 118), the “persuasive force of [which] has been achieved only via a mixture of intellectual and scholarly corruption” (Postal 2009: 104), whereas writings espousing Postal’s ontology “manifest substance and quality of argument at an incomparably higher intellectual level than [Chomsky’s]” (Postal 2009: 105).  As a proponent of biolinguistics, I am tempted to reply in kind to such invective, but to do so would be bad form and bad science.  A fallacy-free and dispassionate—if disputatious—rebuttal is necessary and proper.

For Postal, language is a Platonic object, and therefore he concludes that the biolinguistic assumption of a physical basis for language is “absurd” (Postal 2009: 104).  To the contrary, I shall show Postal’s conclusion to be a non sequitur.

By engaging in this argument, I fully expect Postal to accuse me of having “chosen to defend something [i.e., biolinguistics] its own author [i.e., Chomsky] is unwilling to” (Postal 2009: 105), from which two conclusions necessarily follow in Postal’s mind: (i) I am a living testament to Chomsky’s “intellectual and scholarly corruption” of the youth; and (ii) “By exercising his undeniable right of silence here, Chomsky leaves unimpeded the inference that he has not attempted a refutation because he cannot” (Postal 2009: 105).  It goes without saying that I reject these conclusions and the premise from which they do not follow.  (Incidentally, (i) corrupting the young has noble precedents (e.g., a case from 399 BCE) and (ii) the argumentum a silentio is a classic fallacy.)

This is not an apologia for Chomsky.  Biolinguistics has no single author: it is a research program pursued by numerous individually-thinking scientists subordinate to no individual however foundational and influential.  Moreover the theoretical and empirical contributions of the diverse subprograms in which these scientists work are so numerous and important that none can be “dominant” (Postal 2009: 104): in the intersection of cognitive science, linguistics, and the formal sciences, the formal properties and functional architecture of linguistic cognition are being specified; evolutionary biology is investigating possible homologues/analogues of language in nonhuman animals; genetics is discovering some of the genes entering into the development and operation of the language faculty; neuroscience is mapping the physical substrate of linguistic processing; and this is but a subset of the biolinguistics program to “reinstate the concept of the biological basis of language capacities” (Lenneberg 1967: viii).

The subprogram I work in, call it mathematical biolinguistics, is so theoretically and empirically eclectic that I am naturally interested in its ontology.  It therefore cannot be “odd for [Postal’s] opposite in the present exchange to be anyone other than Chomsky” (Postal 2009: 105). 

In the next section I very briefly and very informally define the biolinguistics Postal impugns.  The third section is a rehearsal of Postal’s arguments for linguistic Platonism and ipso facto (so he assumes) against biolinguistics.  I proceed in the fourth section to analyze some of the flaws in these arguments, demonstrating that the ontologies of Platonism and biolinguistics—properly defined—are not mutually exclusive and contradictory, but in fact mutually reinforcing and consilient in a coherent and compelling philosophy of language.

I must add that my work and the ontology it assumes are not representative of all biolinguistic research.  Many would accept my thesis that, just as engineers have encoded abstract software into concrete hardware, evolution has encoded within the neurobiology of Homo sapiens sapiens a formal system (computable functions) generative of an infinite set of linguistic expressions, modulo my understanding of the formal system as a Platonic object.  Nor is mine the only coherent interpretation of biolinguistics.  So it must not be thought that someone with my philosophy is the only possible “opposite [to Postal] in the present exchange.”

2 Biolinguistics

Let the ontology of some research program be defined as “biolinguistic” if it assumes, investigates, and is informed by the biological basis of language—a definition subsuming many productive programs of research in the formal and natural sciences.  But so general a definition cannot adjudicate the case with Postal.  At issue here is the particular definition of biolinguistics that identifies language as I-language—i.e., a computational system (a function in intension) internal to the cognitive/neurobiological architecture of an individual of the species Homo sapiens sapiens—the properties of which are determined by the three factors that enter into the design of any biological system: genetics, external stimuli, and laws of nature.

That Chomsky invented the term I-language and has expatiated on the three factors does not render him the “author” (Postal 2009: 105) of biolinguistics—that would be a category error analogous to attributing “authorship” of evolutionary biology to Darwin given his invention of the term natural selection and expatiation on the factors entering into common descent with modification.  Biolinguistics and evolutionary biology are research programs to investigate objects and processes of nature.  Thus the only author of I-language is nature.  And thus anyone is free to recognize the ontology of biolinguistics as here defined.

3 Platonist Ontology

The incoherence of the biolinguistic ontology is claimed to derive from the fact that “there can be no such thing” (Postal 2009: 105) as biolinguistics, which assumes that “a mentally represented grammar and [the language-specific genetic endowment] UG are real objects, part of the physical world, where we understand mental states and representations to be physically encoded in some manner [in the brain].  Statements about particular grammars or about UG are true or false statements about steady states attained or the initial state (assumed fixed for the species), each of which is a definite real-world object, situated in space-time and entering into causal relations” (Chomsky 1983: 156-157).  To Postal, this ontology is as “absurd” as a “biomathematics” or a “biologic,” for “Were mathematics biological, brain research might resolve such questions as whether Goldbach’s Conjecture is true.  Were logic biological, one might seek grants to study the biological basis of the validity of Modus Ponens.  The ludicrous character of such potential research is a measure of the folly of the idea that these fields study biological things” (Postal 2009: 104, 105).
           
By analogy, Postal argues that the objects of linguistic inquiry are not physical (a fortiori not biological), but rather “like numbers, propositions, etc. are abstract objects, hence things not located in space and time, indeed not located anywhere.  They are also things which cannot be created or destroyed, which cannot cause or be caused.  [Natural languages] are collections of other abstract objects normally called sentences, each of which is a set” (Postal 2009: 105). 

In the paper under consideration, Postal does not expound this ontology (see Postal 2004); a “brief exposition of its essence” (Postal 2009: 106) suffices for his and my purposes.  Essential to the ontology—a form of linguistic Platonism—are the type/token distinction and discrete infinity.

3.1 Types/Tokens

“ES IST DER GEIST DER SICH DEN KÖRPER BAUT: [S]uch is the nine-word inscription on a Harvard museum.  The count is nine because we count der both times; we are counting concrete physical objects, nine in a row.  When on the other hand statistics are compiled regarding students’ vocabularies, a firm line is drawn at repetitions; no cheating.  Such are two contrasting senses in which we use the word word.  A word in the second sense is not a physical object, not a dribble of ink or an incision in granite, but an abstract object.  In the second sense of the word word it is not two words der that turn up in the inscription, but one word der that gets inscribed twice.  Words in the first sense have come to be called tokens; words in the second sense are called types” (Quine 1987: 216-217).

The distinction applies to sentences: for instance, in the classic story by Dr. Seuss, there exist (by my quick count) six tokens of the one type I do not like green eggs and ham.  Postal defines sentence tokens and types as the objects of inquiry for biolinguistics and linguistic Platonism, respectively.  For biolinguistics, as Postal understands it, a sentence is nothing more than a “brain-internal token” (Postal 2009: 107)—a mental representation.  Such an object is defined by spatiotemporal (neurobiological) coordinates with causes (cognitive, chemical, etc.) and effects (e.g., in reasoning and communication).  For linguistic Platonism, as Postal understands it, this physical object is (if anything) a token of an abstract type, with only the latter being really real.  Empirically, “island constraints, conditions on parasitic gaps, binding issues, negative polarity items, etc.” obtain not of physical objects per se, but of abstractions: “Where is the French sentence Ça signifie quoi? — is it in France, the French Consulate in New York, President Sarkozy’s brain?  When did it begin, when will it end?  What is it made of physically?  What is its mass, its atomic structure?  Is it subject to gravity?  Such questions are nonsensical because they advance the false presumption that sentences are physical objects” (Postal 2009: 107).  For Postal this nonsense is nonfinite.
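
As a concrete illustration of the distinction (mine, not Quine's or Postal's), a few lines of Python suffice; the variable names are invented for the example.

    # Count word tokens (concrete occurrences) versus word types (abstract forms)
    # in the inscription quoted by Quine.
    inscription = "ES IST DER GEIST DER SICH DEN KÖRPER BAUT"
    tokens = inscription.split()       # nine tokens, counting "DER" twice
    types = set(tokens)                # eight types, counting "DER" once
    print(len(tokens), len(types))     # prints: 9 8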

3.2 Discrete Infinity

“[T]he most elementary property of language—and an unusual one in the biological world—is that it is a system of discrete infinity consisting of hierarchically organized objects” (Chomsky 2008: 137).  “Any such system is based on a primitive operation that takes n objects already constructed, and constructs from them a new object: in the simplest case, the set of these n objects” (Chomsky 2005: 11).  “Call [the operation] Merge.  Operating without bounds, Merge yields a discrete infinity of structured expressions” (Chomsky 2007: 5). 

Postal invokes the type/token distinction in his critique of this biolinguistic conception of discrete infinity.  He assumes that any object constructed by a physical system must be physical: “Consider a liver and its production of bile, a heart and its production of pulses of blood; all physical and obviously finite.  And so it must be with any cerebral physical production” (Postal 2009: 109).  Thus if language is a physical (neurobiological) system, then its productions (sentences) must be physical (neurobiological tokens).  But physical objects are by definition bounded by the finiteness of spatiotemporal and operational resources: “There is for Chomsky thus no coherent interpretation of the collection of brain-based expressions being infinite, since each would take time and energy to construct, [...] store, process, or whatever[...]; they have to be some kind of tokens” (Postal 2009: 109, 111).  More abstractly, a discretely (denumerably) infinite set is one with expressions (members) that can be related one-to-one with the expressions of one of its subsets (and with the natural numbers).  But if language is a neurobiological system, hence finite, then obviously it cannot contain or construct a set that can be related to the (countable) infinity of natural numbers: “every physical production takes time, energy, etc. and an infinite number of them requires that the physical universe be infinite and, internal to Chomsky’s assumptions, that the brain be” (Postal 2009: 111).  Reductio ad absurdum, supposedly.
           
If biolinguistics implies that expressions are bounded by the spatiotemporal and operational resources of neurobiology, then the (infinite) majority of expressions contained in the discrete infinity are generable only in principle: there exist infinitely many more possible sentences than can ever be generated in the physical universe.  So for the biolinguistic system to be defined as discretely infinite, it must be defined as an idealization: a system abstracted away from the contingent nature of the spatiotemporal and operational resources of neurobiology.  In other words, the biolinguistic system is discretely infinite only if abstracted from biology.  And this, Postal concludes, is the fundamental fallacy: 

If “the biological [Merge function] ‘ideally’ generates an infinite collection, most of the ‘expressions’ in the collection cannot be physical objects, not even ones in some future, and the [natural language] cannot be one either.  [A]lmost all sentences are too complex and too numerous [...] to be given a physical interpretation[...].  In effect, a distinction is made between real sentences and merely ‘possible’ ones, although this ‘possibility’ is unactualizable ever in the physical universe.  According to the biological view, [...] the supposedly ‘possible’ sentences are, absurdly, actually biologically impossible.  Thus internal to this ‘defense’ of Chomsky’s biolinguistic view, the overwhelming majority of sentences cannot be assigned any reality whatever internal to the supposed governing ontology.  This means the ontology can only claim [natural language] is infinite because, incoherently, it is counting things the ontology cannot recognize as real” (Postal 2009: 111).
 
If, however, tokens as physical objects can implement abstract types, then presumably a recursive rule—a finite type—could be tokenized as a procedure in the mind/brain.  This Postal concedes: although “nothing physical is a rule or recursive,” because recursive rules are Platonic, a “physical structure can encode rules” (Postal 2009: 110).  Presumably therefore Merge—the mentally-represented/neurobiologically-implemented recursive procedure posited in biolinguistics to generate discrete infinity—is a legitimate posit.  But Postal objects: “an interpretation of physical things as representing particular abstractions [is] something Chomsky’s explicit brain ontology has no place for” (Postal 2009: 110).  Furthermore, Merge is supposed to generate sets, and sets are Platonic abstractions, but as “an aspect of the spatiotemporal world, [Merge] cannot ‘generate’ an abstract object like a set” (Postal 2009: 114).  So Merge is either biological—not mathematical and hence incapable of generating a set (let alone an infinite one)—or it is mathematical—hence nonbiological but capable of generating discrete infinity.  In sum, language is either physical or it is Platonic, and only under the latter definition can it be predicated of that “most elementary property,” discrete infinity—or so Postal maintains.
   
4 Mathematical Biolinguistic Ontology

Let me affirm at the outset my commitment to mathematical Platonism, which informs my biolinguistic ontology in ways to be discussed.  More strongly than Chomsky, who does grant mathematical Platonism “a certain initial plausibility,” I am convinced of the existence of “a Platonic heaven [of] arithmetic and [...] set theory,” inter alia, that “the truths of arithmetic are what they are, independent of any facts of individual psychology, and we seem to discover these truths somewhat in the way that we discover facts about the physical world” (Chomsky 1986: 33).  It follows from this position that I must be committed to linguistic Platonism for any linguistic objects reducible to or properly characterized as mathematical objects.  And indeed in my theory of natural language (see Watumull 2012), the quiddities that define a system as linguistic are ultimately mathematical in nature.  (The “essence” of language, if you will, is mathematical—a proposition I shall not defend here, assuming it to be essentially correct, for at issue in this discussion is not whether the proposition is true, but whether it is consistent with a biolinguistic ontology if true.) 

4.1 Overlapping Magisteria

I and others (see, e.g., Hauser, Chomsky, Fitch 2002; Watumull, Hauser, Berwick 2013) posit a recursive function generative of structured sets of expressions as central to natural language; this function is defined in intension as internal to the mind/brain of an individual of the species Homo sapiens sapiens.  So conceived, I-language has mathematical and biological aspects. 

Nonsense!  Postal would spout: The ontologies of mathematics and biology are nonoverlapping magisteria!  Assuming mathematical Platonism, I concur that a mathematical object per se such as a recursive function (the type) is not physical.  However even Postal (2009: 110) concedes that such an object can be physically encoded (as a token).  The rules of arithmetic for instance are multiply realizable, from the analog abacus to the digital computer to the brain; mutatis mutandis for other functions, sets, etc.  And mutatis mutandis for abstract objects definable as mathematical at the proper level of analysis, such as a computer program:

“You know that if your computer beats you at chess, it is really the program that has beaten you, not the silicon atoms or the computer as such.  The abstract program is instantiated physically as a high-level behaviour of vast numbers of atoms, but the explanation of why it has beaten you cannot be expressed without also referring to the program in its own right.  That program has also been instantiated, unchanged, in a long chain of different physical substrates, including neurons in the brains of the programmers and radio waves when you downloaded the program via wireless networking, and finally as states of long- and short-term memory banks in your computer.  The specifics of that chain of instantiations may be relevant to explaining how the program reached you, but it is irrelevant to why it beat you: there, the content of the knowledge (in it, and in you) is the whole story.  That story is an explanation that refers ineluctably to abstractions; and therefore those abstractions exist, and really do affect physical objects in the way required by the explanation” (Deutsch 2011: 114-115).
 
(Though I shall not rehearse the argument here, I am convinced by Gold (2006) that “mathematical objects may be abstract, but they’re NOT [necessarily] acausal” because they can be essential to—ineliminable from—causal explanations.  The potential implications of this thesis for linguistic Platonism are not uninteresting.)

I take the multiple realizability of the chess program to evidence the reality of abstractions as well as anything can (and I assume Postal would agree): something “substrate neutral” (Dennett 1995) is held constant across multiple media.  That something I submit is a computable function; equivalently, that constant is a form of Turing machine (the mathematical abstraction representing the formal properties and functions definitional of—and hence universal to—any computational system). 

4.2 The Linguistic Turing Machine

Within mathematical biolinguistics, it has been argued that I-language is a form of Turing machine (see Watumull 2012; Watumull, Hauser, Berwick 2013), even by those Postal diagnoses as allergic to such abstractions: 

“[E]ven though we have a finite brain, that brain is really more like the control unit for an infinite computer.  That is, a finite automaton is limited strictly to its own memory capacity, and we are not.  We are like a Turing machine in the sense that although we have a finite control unit for a brain, nevertheless we can use indefinite amounts of memory that are given to us externally[, say on a “tape,”] to perform more and more complex computations[...].  We do not have to learn anything new to extend our capacities in this way” (Chomsky 2004: 41-42).

As Postal would observe, this “involves an interpretation of physical things as representing particular abstractions,” which he concedes is coherent in general because obviously “physical structure can encode rules” and other abstract objects (e.g., recursive functions) (Postal 2009: 110)—computer programs, I should say, are a case in point.

4.3 Idealization

Postal (2012: 18) has dismissed discussion of a linguistic Turing machine as “confus[ing] an ideal machine[...], an abstract object, with a machine, the human brain, every aspect of which is physical.”  I-language qua Turing machine is obviously an idealization, with its unbounded running time and access to unbounded memory, enabling unbounded computation.  And obviously “[unboundedness] denotes something physically counterfactual as far as brains and computers are concerned.  Similarly, the claim ‘we can go on indefinitely’ [...] is subordinated to the counterfactual ‘if we just have more and more time.’  Alas we do not, so we can’t go on indefinitely” (Postal 2012: 18).  Alas it is Postal who is confused.

4.3.1 Indefinite Computation
           
Postal’s first confusion is particular to the idealization of indefinite computation.  Consider arithmetic.  My brain (and presumably Postal’s) and my computer encode a program (call it ADD) that determines functions of the form f_ADD(X + Y) = Z (but not W) over an infinite range.  Analogously, my brain (and Postal’s) but not (yet) my computer encodes a program (call it MERGE) that determines functions of the form f_MERGE(α, β) = {α, β}—with syntactic structures assigned definite semantic and phonological forms—over an infinite range.  These programs are of course limited in performance by spatiotemporal constraints, but the programs themselves—the functions in intension—retain their deterministic form even as physical resources vary (e.g., ADD determines that 2 + 2 = 4 independent of performance resources).  
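
A minimal sketch (in Python, my choice of notation, not anything in Postal or Chomsky) of the point: each program below is a finite object, a function in intension, yet it determines a unique value for arguments drawn from an unbounded range.

    # Finite programs (functions in intension) defined over an unbounded range of arguments.
    def f_add(x, y):
        # Determines, e.g., that 2 + 2 = 4, whatever the performance resources.
        return x + y

    def f_merge(a, b):
        # The simplest case of Merge: form the set of the two arguments.
        return frozenset({a, b})

    print(f_add(2, 2))                 # 4
    print(f_merge("green", "eggs"))    # frozenset({'green', 'eggs'})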

Assuming a mathematical biolinguistic ontology, I-language is a cognitive-neurobiological token of an abstract type; it “generates” sets in the way axioms “generate” theorems.  As the mathematician Gregory Chaitin observes, “theorems are compressed into the axioms” so that “I think of axioms as a computer program for generating all theorems” (Chaitin 2005: 65).  Consider how a computer program explicitly representing the Euclidean axioms encodes only a finite number of bits; it does not—indeed cannot—encode the infinite number of bits that could be derived from the postulates, but it would be obtuse to deny that such an infinity is implicit (compressed) in the explicit axioms.  Likewise, z_{n+1} = z_n^2 + c defines the Mandelbrot set (as I-language defines the set of linguistic expressions) so that the infinite complexity of the latter really is implicitly represented in the finite simplicity of the former. 
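
To make the analogy concrete, here is a sketch in Python of the finite rule z_{n+1} = z_n^2 + c as an escape-time test; the cutoff of 100 iterations is an arbitrary choice of mine, since the set itself is defined by the rule, not by any particular run.

    # The finite rule z_{n+1} = z_n^2 + c; the Mandelbrot set is the set of values c
    # for which the iteration stays bounded. The rule is a line long; the set it
    # defines has unboundedly intricate structure.
    def stays_bounded(c, iterations=100):
        z = 0
        for _ in range(iterations):
            z = z * z + c
            if abs(z) > 2:             # once |z| exceeds 2 the orbit diverges
                return False
        return True                    # bounded so far: (approximately) in the set

    print(stays_bounded(complex(-1, 0)))   # True: -1 is in the Mandelbrot set
    print(stays_bounded(complex(1, 0)))    # False: 1 escapes after a few steps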

So while it is true that physically we cannot perform indefinite computation, we are endowed physically with a competence that does define a set that could be generated by indefinite computation.  (A subtle spin on the notion of competence perhaps more palatable to Postal defines it as “the ability to handle arbitrary new cases when they arise” such that “infinite knowledge” defines an “open-ended response capability” (Tabor 2009: 162).)  Postal must concede the mathematical truth that linguistic competence, formalized as a function in intension, does indeed define an infinite set.  However, he could contest my ‘could’ as introducing a hypothetical that guts biolinguistics of any biological substance, but that would be unwise. 

Language is a complex phenomenon: we can investigate its computational (mathematical) properties independent of its biological aspects just as legitimately as we can investigate its biological properties independent of its social aspects (with no pretense to be carving language at its ontological joints).  In each domain, laws—or, at minimum, robust generalizations—license counterfactuals (as is well understood in the philosophy of science).  In discussing indefinite computation, counterfactuals are licensed by the laws expounded in computability theory:

“[T]he question whether a function is effectively computable hinges solely on the behavior of that function in neighborhoods of infinity[...].  The class of effectively computable functions is obtained in the ideal case where all of the practical restrictions on running time and memory space are removed.  Thus the class is a theoretical upper bound on what can ever in any century be considered computable” (Enderton 1977: 530).

A theory of linguistic competence establishes an “upper bound,” or rather delineates the boundary conditions, on what can ever be considered a linguistic pattern (e.g., a grammatical sentence).  Some of those patterns extend into “neighborhoods of infinity” by the iteration of a recursive function.  Tautologically, those neighborhoods are physically inaccessible, but that is irrelevant.  What is important is the mathematical induction from finite to infinite: Merge applies to any two arguments to form a set containing those two elements such that its application can only be bounded by stipulation.  In fact a recursive function such as Merge characterizes the “iterative conception of a set,” with sets of discrete objects “recursively generated at each stage,” such that “the way sets are inductively generated” is formally equivalent to “the way the natural numbers [...] are inductively generated” (Boolos 1971: 223). 
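
Boolos's point can be made concrete with the standard von Neumann encoding of the naturals, in which 0 is the empty set and n + 1 is n together with {n}; the Python sketch below is mine and illustrates only the stage-by-stage recursive generation at issue.

    # Sets generated recursively, stage by stage, in the way the natural numbers are:
    # 0 = {} and n + 1 = n ∪ {n} (the von Neumann encoding).
    def von_neumann(n):
        s = frozenset()                # stage 0: the empty set
        for _ in range(n):
            s = s | {s}                # next stage: adjoin the set built so far as an element
        return s

    print(von_neumann(0))   # frozenset()
    print(von_neumann(2))   # frozenset({frozenset(), frozenset({frozenset()})}), i.e. {0, 1}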

The natural numbers are subsumed in the computable numbers, “the real numbers whose expressions as a decimal are calculable by finite means” (Turing 1936: 230).  (The phrase “finite means” should strike a chord with many language scientists.)  It was by defining the computable numbers that Turing proved the coherency of a finitary procedure generative of an infinite set.

“For instance, there would be a machine to calculate the decimal expansion of π[...].  π being an infinite decimal, the work of the machine would never end, and it would need an unlimited amount of working space on its ‘tape’.  But it would arrive at every decimal place in some finite time, having used only a finite quantity of tape.  And everything about the process could be defined by a finite table[...].  This meant that [Turing] had a way of representing a number like π, an infinite decimal, by a finite table.  The same would be true of the square root of three, or the logarithm of seven—or any other number defined by some rule” (Hodges 1983: 100).
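
The point is easily made concrete. The sketch below (Python, using ordinary long division for a rational number rather than π, simply to keep the "finite table" short and obviously correct) has exactly Turing's property: its work would never end, but it arrives at every decimal place in some finite time.

    # A finite "table" (the loop body) that arrives at every decimal place of p/q
    # in finite time, though the work as a whole never ends.
    from itertools import islice

    def decimal_digits(p, q):
        r = p % q
        while True:                    # unbounded work overall...
            r *= 10
            yield r // q               # ...but each digit arrives after finitely many steps
            r %= q

    print(list(islice(decimal_digits(1, 7), 12)))   # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]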

Though they have not been sufficiently explicitly acknowledged as such, Turing’s concepts are foundational to the biolinguistic program.  I-language is a way of representing an infinite set by a finite table (a function).  The set of linguistic expressions being infinite, “the work of the machine would never end,” but Postal must concede that nevertheless I-language “would arrive at every [sentence] in some finite time, having used only a finite quantity of tape.  And everything about the process could be defined by a finite table.”  This gives a rigorous sense to the linguistic notion “infinite use of finite means.”  

4.3.1.1 Generation and Explanation

But for all the foregoing, the finitude/infinitude distinction is not so fundamental given the fact that “[a] formal system can simply be defined to be any mechanical procedure for producing formulas” (Gödel 1934: 370).  The infinitude of the set of expressions generated is not as fundamental as the finitude of I-language (the generative function) for the following reason: it is only because the function is finite that it can enumerate the elements of the set (infinite or not); and such a compact function could be—and ex hypothesi is—neurobiologically encoded.  Even assuming Postal’s ontology in which “[natural languages] are collections of [...] abstract objects” (Postal 2009: 105), membership in these collections is granted (and thereby constrained) by the finitary procedure, for not just any (abstract) object qualifies.  In order for an object to be classified as linguistic, it must be generated by I-language; in other words, to be a linguistic object is to be generated by I-language.  And thus I-language explains why a given natural language contains the member expressions it does. 
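
A sketch in Python of the explanatory point: membership is granted by the finite generative procedure. The three-word lexicon and bare Merge below are toys of my own, not a serious grammar; the point is only that an object counts as a member just in case some finite run of the procedure constructs it.

    # A finite procedure that grants membership: an object belongs to the collection
    # just in case it is constructed from the lexicon by finitely many Merge steps.
    LEXICON = frozenset({"green", "eggs", "ham"})      # toy lexicon

    def merge(a, b):
        return frozenset({a, b})

    def generated_up_to(depth):
        objects = set(LEXICON)
        for _ in range(depth):
            objects |= {merge(a, b) for a in objects for b in objects if a != b}
        return objects

    candidate = merge("green", merge("eggs", "ham"))
    print(candidate in generated_up_to(2))    # True: generated, hence a member
    print("spam" in generated_up_to(2))       # False: not generated, hence not a member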
           
This notion of I-language as explanation generalizes to the notion of formal system as scientific theory:

“I think of a scientific theory as a binary computer program for calculating observations, which are also written in binary.  And you have a law of nature if there is compression, if the experimental data is compressed into a computer program that has a smaller number of bits than are in the data that it explains.  The greater the degree of compression, the better the law, the more you understand the data.  But if the experimental data cannot be compressed, if the smallest program for calculating it is just as large as it is [...], then the data is lawless, unstructured, patternless, not amenable to scientific study, incomprehensible. In a word, random, irreducible” (Chaitin 2005: 64).

This notion is particularly important, as Turing (1954: 592) observed, “[w]hen the number is infinite, or in some way not yet completed [...],” as it is for the discrete infinity (unboundedness) of language; “a list of answers will not suffice.  Some kind of rule or systematic procedure must be given.”  Otherwise the list is arbitrary and unconstrained.  So for linguistics, in reply to the question “Why does the infinite natural language L contain the expressions it does?” we answer “Because it is generated by the finite I-language f.”  Thus I-language can be conceived of as the theory explicative of linguistic data because it is the mechanism (Turing machine) generative thereof.
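
Chaitin's notion admits a toy illustration (Python; the particular string is my own arbitrary choice): the "data" below is lawful precisely because a program much shorter than the data regenerates it.

    # A lawful (compressible) data set: the generating expression is far shorter than the data.
    data = "01" * 1_000_000            # two million characters of "data"
    theory = '"01" * 1_000_000'        # a tiny "theory" that regenerates it
    print(len(data), len(theory))      # the data is two million characters; the theory, a handful
    print(eval(theory) == data)        # the theory reproduces the data exactly: True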

4.3.2 “the thing in itself”

Second, with respect to idealization generally, for mathematical biolinguistics to have defined I-language as a Turing machine is not to have confused the physical with the abstract, but rather to have abstracted away from the contingencies of the physical, and thereby discovered the mathematical constants that must of necessity be implemented for any system—here biological—to be linguistic (on my theory).  This abstraction from the physical is part and parcel of the methodology and, more importantly, the metaphysics of normal science, which proceeds by the “making of abstract mathematical models of the universe to which at least the physicists give a higher degree of reality than they accord the ordinary world of sensation” (Weinberg 1976: 28).  The idealization is the way things really are.  Consider Euclidean objects: e.g., dimensionless points, breadthless lines, perfect circles, and the like.  These objects do not exist in the physical world.  The points, lines, and circles drawn by geometers are but imperfect approximations of abstract Forms—the objects in themselves—which constitute the ontology of geometry.  For instance, the theorem that a tangent to a circle intersects the circle at a single point is true only of the idealized objects; in any concrete representation, the intersection of the line with the circle cannot be a point in the technical sense as “that which has no part,” for there will always be some overlap.  As Plato understood (Republic VI: 510d), physical reality is an intransparent and inconstant surface deep beneath which exist the pellucid and perfect constants of reality, formal in nature:

            “[A]lthough [geometers] use visible figures and make claims about them, their thought isn’t directed to them but to the originals of which these figures are images.  They make their claims for the sake of the Square itself and the Diagonal itself, not the particular square and diagonal they draw; and so on in all cases.  These figures that they make and draw, of which shadows and reflections in water are images, they now in turn use as images, in seeking to behold those realities—the things in themselves—that one cannot comprehend except by means of thought.”

Analogously, any particular I-language (implemented in a particular mind/brain) is an imperfect representation of a form (or Form) of Turing machine.  But, Postal would object, the linguistic Turing machine is Platonic, hence nonbiological, and hence bio-linguistics is contradictory.  But, I should rebut, this objection is a non sequitur.
           
I am assuming (too strongly perhaps) that fundamentally a system is linguistic in virtue of mathematical (nonbiological) aspects.  Nevertheless, in our universe, the only implementations of these mathematical aspects yet discovered (or devised) are biological; indeed the existence of these mathematical systems is known to us only by their biological manifestations—i.e., in our linguistic brains and behaviors—which is reason enough to pursue bio-linguistics.  To borrow some rhetorical equipment, biology is the ladder we climb to the “Platonic heaven” of linguistic Forms, though it would be scientific suicide to throw the ladder away once up it.  That chance and necessity—biological evolution and mathematical Form—have converged to form I-language is an astonishing fact in need of scientific explanation.  It is a fact that one biological system (i.e., the human brain) has encoded within it and/or has access to Platonic objects.  (Postal must assume that our finite brains can access an infinite set of Platonic sentences.  The ontological status of the latter is not obvious to me, but obviously I am committed to the existence of the encoding within our brains of a finite Platonic function for unbounded computation.)  Surely a research program formulated to investigate this encoding/access is not perforce incoherent.

I do however deny any implication here that such complex cognition, “in some most mysterious manner, springs only from the organic chemistry, or perhaps the quantum mechanics, of processes that take place in carbon-based biological brains.  [I] have no patience with this parochial, bio-chauvinistic view[:] the key is not the stuff out of which brains are made, but the patterns that can come to exist inside the stuff of a brain” (Hofstadter 1999 [1979]: P-4, P-3).  Thus, as with chess patterns, it is not by necessity that linguistic patterns spring from the stuff of the brain; but the fact remains that they can and do.  And thus linguistics is just as much a biological science as it is a formal science.   

To reiterate, at present there exists no procedure other than human intuition to decide the set of linguistic patterns.  The neurobiology cannot answer the question whether some pattern is linguistic (e.g., whether some sentence is grammatical), but it encodes the procedure that enables the human to intuit the answer to such a question.  Analogously, neurobiological research would not establish the truth of Goldbach’s conjecture or the validity of reasoning by modus ponens, but rather would be unified with research in cognitive science to establish (discover) the rules and representations encoded neurobiologically that enable cognitive conjecture and reasoning.

4.4 Encoding Abstract Objects in Physical Systems

Postal believes that an “explicit brain ontology” as assumed in biolinguistics “has no place for” the encoding of an abstract object such as a Turing machine in a physical system such as the brain—but I see no grounds whatsoever for this belief.  Not only is this belief contradicted by Chomsky’s Turing machine analogy, but Postal himself quotes Chomsky discussing how in biolinguistics “we understand mental states and representations to be physically encoded in some manner” (1983: 156-157); and to physically encode something presumes a non-physical something to be so encoded.  For this reason “it is the mentalistic studies that will ultimately be of greatest value for the investigation of neurophysiological mechanisms, since they alone are concerned with determining abstractly the properties that such mechanisms must exhibit and the functions they must perform” (Chomsky 1965: 193).

It is in this sense of neurobiology encoding mathematical properties and functions that, “astonishingly” (Postal 2012: 23), we observe the obvious fact that “We don’t have sets in our heads.  So you have to know that when we develop a theory about our thinking, about our computation, internal processing and so on in terms of sets, that it’s going to have to be translated into some terms that are neurologically realizable.  [Y]ou talk about a generative grammar as being based on an operation of Merge that forms sets, and so on and so forth.  That’s something metaphorical, and the metaphor has to be spelled out someday” (Chomsky 2012: 91).  In other words, while the formal aspects of a Turing machine (e.g., Merge, sets, etc.) are, ex hypothesi, realized neurologically, it would be absurd (astonishing) to expect physical representation of our arbitrary notations (e.g., f_MERGE(X, Y) = {X, Y}).  As Turing observed, in researching the similarities of minds and machines, “we should look [...] for mathematical analogies of function” (1950: 439)—similarities in software, not hardware.

Of course an ontological commitment to abstract properties and functions is not necessarily a commitment to Platonism (as Aristotle demonstrated and many in the biolinguistics program would argue), but it is certainly the default setting.  So it can be argued that I-language is just like Deutsch’s chess program: a multiply realizable computable function (or system of computable functions).  Indeed given that a Turing machine is a mathematical abstraction, I-language qua Turing machine is necessarily and properly defined as a physically (neurobiologically) encoded Platonic object.
           
5 Conclusion

I have argued that mathematical biolinguistics is based on the perfectly coherent concept of computation—as formulated by Turing—unifying mathematical Platonism and biolinguistics: evolution has encoded within the neurobiology of Homo sapiens sapiens a formal system (computable function(s)) generative of an infinite set of linguistic expressions (just as engineers have encoded within the hardware of computers finite functions generative of infinite output).  This thesis, I submit, is or would be accepted by the majority of researchers in biolinguistics, perhaps modulo the Platonism, for indeed it is not necessary to accept the reality of mathematical objects to accept the reality of physical computation.  However, I am a mathematical Platonist, and thus do recognize the reality of mathematical objects, and thus do argue I-language to be a concretization (an “embodiment” in the technical sense) of a mathematical abstraction (a Turing machine), which to my mind best explains the design of language.

References

Boolos, George S. 1971. The iterative conception of set. The Journal of Philosophy 68: 215-231.
Chaitin, Gregory. 2005. Meta Math: The Quest for Omega. New York: Pantheon Books.
Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1983. Some conceptual shifts in the study of language. In How Many Questions? Essays in Honor of Sidney Morgenbesser, ed. by Leigh S. Cauman, Isaac Levi, Charles D. Parsons, and Robert Schwartz, 154-169. Indianapolis, IN: Hackett Publishing Company.
Chomsky, Noam. 1986. Knowledge of Language: Its Nature, Origins and Use. New York:
Praeger.
Chomsky, Noam. 2004. The Generative Enterprise Revisited: Discussions with Riny Huybregts,
Henk van Riemsdijk, Naoki Fukui, and Mihoko Zushi. The Hague: Mouton de Gruyter.
Chomsky, Noam. 2005. Three factors in language design. Linguistic Inquiry 36: 1-22.
Chomsky, Noam. 2007. Approaching UG from below. In Interfaces + Recursion = Language?:
Chomsky’s Minimalism and the View from Syntax-Semantics, ed. by Uli Sauerland and Hans-Martin Gärtner, 1-29. Berlin: Mouton de Gruyter.
Chomsky, Noam. 2008. On phases. In Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, ed. by Robert Freidin, Carlos P. Otero, and Maria Luisa Zubizarreta, 133-166. Cambridge, MA: MIT Press.
Chomsky, Noam. 2012. The Science of Language: Interviews with James McGilvray.
Cambridge: Cambridge University Press.
Dennett, Daniel C. 1995. Darwin’s Dangerous Idea: Evolution and the Meanings of Life.
London: Penguin Books.
Deutsch, David. 2011. The Beginning of Infinity. London: Allen Lane.
Enderton, Herbert B. 1977. Elements of recursion theory. In Handbook of Mathematical Logic,
ed. by Jon Barwise, 527-566. Amsterdam: North-Holland.
Gödel, Kurt. 1934 [1965]. On undecidable propositions of formal mathematical systems. In The
Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, ed. by Martin Davis, 39-74. New York: Raven Press.
Gold, Bonnie. 2006. Mathematical objects may be abstract—but they’re NOT acausal! Ms.,
Monmouth University.
Hauser, Marc D., Noam Chomsky, and W. Tecumseh Fitch. 2002. The faculty of language: What
is it, who has it, and how did it evolve? Science 298: 1569-1579.
Hodges, Andrew. 1983. Alan Turing: The Enigma. New York: Simon and Schuster.
Hofstadter, Douglas R. 1999 [1979]. Gödel, Escher, Bach: An Eternal Golden Braid. New York:
Basic Books.
Lenneberg, Eric H. 1967. Biological Foundations of Language. New York: Wiley.
Postal, Paul M. 2004. Skeptical Linguistic Essays. New York: Oxford University Press.
Postal, Paul M. 2009. The incoherence of Chomsky’s ‘biolinguistic’ ontology. Biolinguistics 3:
104-123.
Postal, Paul M. 2012. Chomsky’s foundational admission. http://ling.auf.net/lingbuzz/001569
Quine, Willard Van Orman. 1987. Quiddities. Cambridge, MA: Harvard University Press.
Tabor, Whitney. 2009. Dynamical insight into structure in connectionist models. In Toward a Unified Theory of Development: Connectionism and Dynamic Systems Theory Re-Considered, ed. by John P. Spencer, Michael S.C. Thomas, and James L. McClelland, 165-181. Oxford: Oxford University Press.
Turing, Alan M. 1936. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42: 230-265.
Turing, Alan M. 1950. Computing machinery and intelligence. Mind 59: 433-460.
Turing, Alan M. 1954. Solvable and unsolvable problems. Science News 31: 7-23.
Watumull, Jeffrey. 2012. A Turing program for linguistic theory. Biolinguistics 6.2: 222-245.
Watumull, Jeffrey, Marc D. Hauser, and Robert C. Berwick. 2013. Comparative evolutionary approaches to language: On theory and methods. Ms., Massachusetts Institute of Technology.
Weinberg, Steven. 1976. The forces of nature. Bulletin of the American Academy of Arts and
Sciences 29: 13-29.



64 comments:

  1. My first reading suggests that, unfortunately, Watumull has not shown "that Platonism and the biolinguistic program are perfectly compatible." If one accepts his alleged rebuttal one ends up with the 'worst of both worlds':

    [i] accepting his version of Platonism forces us to accept that not even the 'completed neuroscience of the future' can give a full account of linguistics [so all good naturalists must be disappointed by that].

    And [ii] accepting that I-language “would arrive at every [sentence] in some finite time, having used only a finite quantity of tape. And EVERYTHING about the process could be defined by a finite table” would eliminate linguistic creativity from our cognitive repertoire, and we would be the automatons Descartes argued convincingly human minds are not.

    The post is indeed on the long side but more than half of the paper is irrelevant digression that reveals, among other things, a serious misunderstanding of fairly basic points that have nothing to do with Platonism (e.g. that by 'Chomsky's biolinguistics' Postal just means ‘the biolinguistics proposed by Chomsky’).

    Regarding substance, we have not seen the squaring of the circle but witnessed the same kind of conflation that has plagued Chomsky's biolinguistics. Take this analogy:

    "Assuming a mathematical biolinguistic ontology, I-language is a cognitive-neurobiological token of an abstract type; it “generates” sets in the way axioms “generate” theorems. ... Likewise, zn+1 = zn2 + c defines the Mandelbrot set (as I-language defines the set of linguistic expressions) so that the infinite complexity of the latter really is implicitly represented in the finite simplicity of the former."

    The problem here is of course that z_{n+1} = z_n^2 + c is a mathematical function [type] but the analogue [I-language] is supposedly a cognitive-neurobiological token. And this is at the heart of Postal's criticism. Like Chomsky, Watumull needs I-language to be both: an abstract type and a physical token. But it can only be one or the other. The moment the ‘software’ IS implemented in a physical brain, hardware [or 'wetware'] matters a great deal and all the talk of multiple realizability becomes irrelevant.

    Involuntarily apt is this analogy: "biology is the ladder we climb to the “Platonic heaven” of linguistic Forms, though it would be scientific suicide to throw the ladder away once up it." One is reminded of the ill-fated flight of Icarus who paid the ultimate price for 'abstracting away' from the physical properties of wax and feathers. In biolinguistics abstracting away 'works' as long as we talk ABOUT brains. But we cannot abstract away the fact that brains ARE physical objects. It is possible to argue coherently that brains generate a minute fraction of tokens of possible sentences but that leaves most of language unaccounted for. And Katz has shown decades ago why resorting to 'potential generation' cannot succeed.

  2. Christina -- a lot of these arguments seem to apply also as criticisms of thinking of computers as Turing machines -- do you agree, or if not, what is the difference?
    We think of a (real, physical token) computer as implementing an (abstract, type) program. Is that mistaken in your view?

    @ Alex: Remember, I am no Platonist myself. But as I understand the view, the short answer is 'yes', the arguments apply to physical computers as well: you can of course talk about 'abstract Turing machines' if you refer to a set of mathematical operations, but once you implement a computer program in a physical computer it [the computer] cannot generate an infinite output [as discussed by Postal http://ling.auf.net/lingbuzz/001608, and Katz & Postal http://ling.auf.net/lingbuzz/001607]. Probably the best paper on the topic though is Katz, J. (1996). The unfinished Chomskyan revolution. Mind and Language 11, 270-294. [I was very surprised that as a self-proclaimed Platonist Watumull did not cite this]

    BTW, I find the talk of 'multiple realizability' tends to confuse matters because it seems to lead to some people forgetting that saying 'it does not matter what kind of Turing machine you implement the program in' is very different from saying the properties of the Turing machine in which you DO implement the program do not matter.

    Also note that Watumull repeatedly talks about abstract objects as abstractions. But that is confusing the technical senses of these terms. An abstraction involves starting with certain properties [usually of concrete objects] and setting them aside. But this process will not turn a concrete object into an abstract object. No properties are set aside when one talks about abstract objects like integers.

    1. Well I think it is much easier to see the problems with this argument when you take it as an argument against computer science rather than as an argument against biolinguistics.
      Not least because the achievements of computer science are rather more significant than those of biolinguistics.
      And in CS no-one is confused about the difference between a Turing machine and a physical computer nor about the relationship between these two objects.

      Do you accept that Postal's arguments do not go through against Computer Science?

      (I don't think your view of abstraction and abstract objects is very standard -- one way of thinking of natural numbers is as abstractions of collections of objects. )

    2. I admit, at times I am not sure what you are asking. To me the ontological issue seems so clear that I am wondering if you honestly do not see what the issue that Postal raises is, or if I misunderstand what problem you envision with Postal’s view.

      No one denies that a computer (or a brain) undergoes certain changes in physical state that eventually yield a final state that corresponds to a certain abstract object [but obviously this final state is not an abstract object]. So if this is what you mean by 'implementing a program', then this is not the issue. Rather, the point is that the brain doesn't embody that abstract object; the brain embodies a biophysical state that has some kind of relationship to the abstract object. So we can say that the brain is able to move through a series of electrochemical states which correspond to solving an algebraic equation. But then we have to ask: are those brain states the equation/solution steps (stated as a series of equalities), or the solution of the equation? And the same holds for the changes in electrochemical states in a computer (or physical position of the beads in an abacus, or ....).

      As far as implementing computer programs that could be relevant to linguistics is concerned, you know much better than me that it is trivial to write an iterative program that loops until the machine is interrupted by some physical limitation, displaying, say, "Hello world, Hello world, .... Hello world,.... " with the display auto-refreshing. The key here is that only a fixed amount of information has to be kept in memory, and if that amount is below the physical limits of the machine, then the algorithm will produce something as long as the machine is running. You also know that recursive programs are somewhat different and that head recursion requires a stack, even if implemented iteratively. Hence, the physical limits of the HW memory would eventually be exceeded and the machine would 'stop' producing output even if it has not run out of electricity or broken down due to wear on some of its parts or.... Now you may appeal to "appropriate levels of abstraction" or to "idealization", e.g. expandable memory, or for brains, counterfactually expandable memory, etc. That is fine as long as you’re talking about a model, but when you move back from a model and talk about real brains or real computers there will be a perhaps difficult to specify but definitely finite upper bound. And as far as biolinguistics is concerned: language is in the brain, and this limits the kinds of abstractions you should apply...

      Regarding abstract objects, I use the term as Postal & Katz do - if I want to try to understand their view, that is what I have to do. Katz provides some detailed discussion of this issue [including about 'impure sets'] and I think it really is better you read what he wrote. All I can say is if you actually want to understand what the issues are that Katz and Postal raise, you have to be willing to at least consider that they could be right. They may be wrong but they are not as obviously wrong as Watumull seems to think or as you seem to imply when you say: " ... it is much easier to see the problems with this argument when you take it as an argument against computer science... Not least because the achievements of computer science are rather more significant than those of biolinguistics."

      Apparent or even actual success does not always imply that your model is correct. Remember: Ptolemy provided astronomical calculations to compute the positions of Sun, Moon & planets, the rising and setting of the stars, and eclipses of the Sun and Moon that were quite 'successful' long beyond his lifetime. This, however, did not mean that Ptolemy's geocentric model was correct. My point is that you can do a lot of successful work in computer science without having to worry about your ontological framework. And if this kind of success is what is important to you, then you probably should not worry about ontology.

    3. There is an obvious and important distinction between a false empirical claim (the sun goes round the earth) and an idealised model (incompressible air in aerodynamics, the infinite tape of a turing machine).

      The question is not whether computers are finite (they are not), but whether Turing machines are a useful abstraction. My claim is that the demonstrable success of theoretical computer science in the design of real systems is an incontrovertible demonstration of the utility of the counterfactual idealisation.
      Now you are quite right that real computers have finite amounts of storage, and so at some point one needs to consider this fact. And indeed systems programmers and so on often use models that are much closer to physical computers. This doesn't mean that TCS is incoherent; rather, we use models that abstract away from different things at different times. This is a very standard thing.

      Now I don't think there is a definite finite bound -- for each physical system there is a finite bound, but that may fluctuate (as space becomes available, as components/cells break or die), and in any event it may differ between very similar organisms/systems. So there are good reasons to consider the boundedness of these systems in a different way: i.e. not considering a Turing machine with a tape of length 137, say, but using complexity theory, which looks at asymptotic bounds on the finite resources used by an infinite machine.
      Similar arguments apply I think to cognitive systems in which context I recommend the clear and helpful discussion in "The tractable cognition thesis", Van Rooij, Iris 2008.


      Metaphysical concerns about the status of abstract objects don't seem relevant here, any more than arguments about Platonism in mathematics are relevant to the concerns of working mathematicians.

      But I am very willing to admit that I might be wrong -- I just haven't seen a good argument yet against this standard model of idealisation.

      (I don't find the biolinguistic rhetoric very helpful in making these distinctions clearer, since the competence/performance distinction gets mixed in early on, and there are systematic confusions between internal grammars and models of internal grammars).

      You didn't answer my question, which I put again: Do you accept that Postal's arguments do not go through against Computer Science?

    4. Before I can answer your question, can you please clarify, if you really meant to say

      [1] The question is not whether computers are finite (they are not),

      or if this was a typo and you intended to say either [2] or [3], as I think you did:

      [2] The question is not whether computers are finite (they are)
      [3] The question is not whether computers are infinite (they are not),

    5. yes sorry -- quick typing and no proof-reading!

    6. This comment has been removed by the author.

    7. Thanks for the clarification. I do not question the utility of the counterfactual idealization. The question for any counterfactual idealization is what claims are being made in a given case and whether they actually provide applicable results, or whether they might be misleading, actually taking one away from understanding the domain of study. I agree that in the computer case idealizations used are less problematic than in the language case [see also my reply to Avery]

      I agree with you that the TCM idealization is not incoherent. But the TM idealization is at a general level, where one is interested in mathematical worst / best case scenarios (performance) irrespective of physical limits, physical structure or organization. Understanding of very ordinary algorithms like sorting algorithms, searching algorithms comes from such studies. But the analogy here is that the TCM idealization is on par with formal linguistics and not with biological studies and no biological claims can be made.

      Now to your question. You ask: Do you accept that Postal's arguments do not go through against Computer Science?

      The arguments do not apply as long as people in computer science keep their claims and domains of study straight -- as long as they do not claim that mathematical bounds on parsers, or the use of recursive grammars to describe some data, entail anything about physical computers, or that mathematical studies are studies of physical systems. But if anyone claims [as we have clarified you do not!] that from the fact that the idealized TM can have an infinite output it follows that a physical computer can have such output, then obviously Postal’s arguments do apply.

      Delete
    8. Do I understand you correctly that you say that any "physically implemented" program (i.e. anything actually being realized in a physical structure, such as a computer's memory), can never be properly said to be the program the programmer intended because any physical computer will only have finite memory?
      But that saying that the physical implementation _represents_ an abstract entity, the actual program, is the proper way of speaking?

      Delete
    9. No, you do not understand me correctly, and I cannot fathom why you would think I [or anyone for that matter] would believe something that silly. Obviously a program that requires the computer to have an infinite memory could not run on any actual computer. But I cannot imagine any programmer 'intending' that the computer on which his/her program runs has an infinite memory - can you?

      Delete
    10. Where did I talk about a program requiring infinite memory?! Take a simple program for calculating the Fibonacci numbers using tree-recursion --- as far as I'm concerned, this program yields an answer for any n, and that is clearly the intention of the programmer.
      Yes, in practice, it'll run out of memory rather quickly and will refuse to give an answer for any input larger than some finite number. But this limitation is fully external to the program --- you can code it up, compile it, and without making any changes to the program make changes to the environment you execute it in (say, make available more memory), allowing it to return answers for larger numbers before running out of memory.
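
      Concretely, the sort of program I have in mind is presumably something like this (a minimal sketch in Python, illustrative only):

          def fib(n):
              """Tree recursion: fib(n) = fib(n-1) + fib(n-2)."""
              if n < 2:
                  return n
              return fib(n - 1) + fib(n - 2)

          # Nothing in the definition bounds n; a call fails for large n only because of
          # the environment (recursion/stack limits, available memory, patience), none of
          # which is mentioned in the program text itself.
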
      So to rephrase the question more concretely, in this case it wouldn't be proper to say that there literally _is_ a program in the computer that calculates Fibonacci-numbers using tree-recursion, but it would be okay to say that there is some representation of the abstract program? I'm simply at a loss as to how you would like talk about computer programs (and ultimately, of course, cognitive abilities) to be phrased. I'm afraid I misconstrued your views, so please do help me out.

      Delete
    11. First I think we agree that there is a difference between 'the simple program for calculating Fibonacci [F] numbers' and the computer on which the program eventually is implemented. So the case is different from what Chomsky claims for Natural Language [which is what I am ultimately interested in, while you seem to be more interested in computer science issues?]

      Now you ask: "in this case it wouldn't be proper to say that there literally _is_ a program in the computer that calculates Fibonacci-numbers using tree-recursion"

      Right, it would not literally be proper to say this unless you mean that the executing code is a description of an algorithm (abstract) that generates in the mathematical sense the entire set. Now in many cases you probably can ignore this fine point because the conflation does not cause any trouble. But when one talks about brains with fixed capacities, the situation changes. For one thing, there is no clear HW/SW distinction in the brain; memory is more or less fixed (at least we can make that idealization here). The right analogy would be to a special purpose chip that realizes the algorithm along with the memory limitations.

      Now would you say that even in that scenario, closer to brains, the chip literally calculates the F numbers? Or would you say that the chip is a HW realization of an algorithm that generates the F numbers?
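
      [To make that scenario concrete, here is a toy software analogue of such a chip -- my construction, purely illustrative:]

          MAX_REGISTER = 2**16  # pretend the device has 16-bit registers, fixed 'in hardware'

          def fib_chip(n):
              """Iterative Fibonacci with the capacity limit built into the definition."""
              a, b = 0, 1
              for _ in range(n):
                  a, b = b, a + b
                  if b >= MAX_REGISTER:
                      return None  # the device simply has no answer beyond this point
              return a

          # Here there is no separate 'environment' to enlarge: the finiteness is intrinsic
          # to the device, which is the sense in which it realizes an algorithm for the
          # F numbers without literally generating the whole (infinite) set.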

      Delete
    12. This comment has been removed by the author.

      Delete
    13. "Right, it would not literally be proper to say this unless you mean that the executing code is a description of an algorithm (abstract) that generates in the mathematical sense the entire set."
      Going down this road simply leads to more vexing questions --- in virtue of what is the executing code then a description of that algorithm rather than an ad-hoc Kripkensteinian algorithm that calculates the Quibonacci-numbers?
      Part of the ingenuity of Chomsky's refusal to construe his view in terms of representations with representational content is that he entirely side-steps (or at least, tries to side-step) the problem of intentionality (something that becomes very clear in his discussion(s) with Georges Rey).
      I'm not saying there are no problems with Chomsky's views but what has occupied so much of this blog lately strikes me as utterly unconvincing.

      "For one thing, there is no clear HW/SW distinction in the brain; memory is more or less fixed (at least we can make that idealization here)."
      Well, how do we know? Perhaps our current view of the brain simply doesn't allow us to come up with one; quite frankly, our current view of the brain also doesn't allow us to individuate representational states properly. What lessons you draw from this is mostly up to you (Quine chose to avoid talk of representations, Chomsky chose to be skeptical about our current knowledge about the brain), but "going platonic" doesn't seem to help a bit, as far as I'm concerned. At least not if you're interested in trying to explain how humans can do what they do which, if I understand correctly, is what Chomsky (and his fellow bio-linguists) are interested in.
      Whether Postal's ontology is better suited to his own goals in linguistic theory I don't know and probably can't properly judge but I also couldn't care less. And I still fail to see any _incoherence_ with Chomsky's methodological naturalism.

      Delete
    14. Just a couple of questions:

      1. If I wrote the program to calculate the F numbers and implemented it in a computer and the computer performs as expected, why would I now reject the explanation I gave above and worry about Kripkensteinian puzzles? BTW, for replies to Kripke & Wittgenstein style rule-following puzzles I recommend Katz - he has addressed them in detail and at least to me his arguments are convincing.

      2. "For one thing, there is no clear HW/SW distinction in the brain; memory is more or less fixed (at least we can make that idealization here)."
      Well, how do we know? Perhaps our current view of the brain simply doesn't allow us to come up with one

      You are right that currently we do not know much about how the brain works. So what makes you think then that Chomsky's proposal is on the right track? If he has novel insights about brain properties one would assume he would share them with us? If he is just as much in the dark as the rest of us what makes his guess better than say Tomasello's or Everett's?

      3. I am not aware of anyone who suggests 'going platonic' will help when we try to explain human cognition. Postal and Katz have explained ad nauseam that this is not their project - so is there anyone who claims otherwise?

      Now what SHOULD interest anyone who wants to do biolinguistics is what kind of 'object' language is: is it [1] like 'having a pain' as Norbert has claimed in his "Going Postal" blog or is it [2] a formal object as suggested here and elsewhere:

      "the grammar of a language generates an infinite set of ‘structural descriptions,’ each structural description being an abstract object of some sort that determines a particular sound, a particular meaning, and whatever formal properties and configurations serve to mediate the relation between sound and meaning". (Chomsky 1972)

      More modern formulations talk of "generative grammar as being based on an operation of Merge that forms sets" (Chomsky 2012) [a toy rendering of this set-formation idea is sketched at the end of this comment]

      If you think language is [1], Platonism is utterly irrelevant. But I cannot imagine that many people seriously hold [1] because it would mean by necessity that language is finite, that no language could literally have unbounded recursion, discrete infinity and other allegedly essential features. In that case you cannot abstract from brain properties such as finiteness if you model language but have to build them right into your model.

      If you think language is like [2] then you can think about what would be the best model. 'Going Platonic' is one option here, but if you think you can do better with a different model that is entirely up to you. Of course if you do BIOlinguistics at one point you have to worry whether your model can be implemented in a human brain. In Language of Mind Chomsky admits he has no clue how his model could be "translated into some terms that are neurologically realizable". Given that he has spent 60 years trying to figure it out and is probably one of the smartest people who have tried, that would worry me...

      4. "Part of the ingenuity of Chomsky's refusal to construe his view in terms of representations with representational content is that he entirely side-steps (or at least, tries to side-step) the problem of intentionality "

      To me side-stepping the problem of intentionality is putting what is really interesting about language back into a black box from which 'the mental' allegedly was rescued when Chomsky overthrew behaviourism. How can you hope to account even in principle for creativity if you sidestep these issues? So I certainly agree with you that the incoherence issue is not the only problem with Chomsky's view. But apparently it IS an issue that worries some people - I cannot imagine Watumull would have written such a lengthy paper if he'd thought incoherence is nothing to worry about.
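
      [To make the quoted idea in point 3 concrete, here is a toy rendering, in Python, of Merge as set formation -- my illustration only, not a claim about how Chomsky would formalize it:]

          def merge(x, y):
              """Merge(X, Y) = {X, Y} -- binary set formation."""
              return frozenset({x, y})

          # e.g. merge('eat', merge('the', 'apples')) yields the nested set
          # {'eat', {'the', 'apples'}}: a hierarchically organized, set-theoretic object.
          # It is the relation of such abstract objects to a finite brain that is at issue.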

      Delete
    15. "1. If I wrote the program to calculate the F numbers and implemented it in a computer and the computer performs as expected why would I now reject the explanation i gave above and worry about Kripkensteinian puzzles? BTW, for replies to Kripke & Wittgenstein style rule-following puzzles i recommend Katz - he has addressed them in detail and at least to me his arguments are convincing."
      Because you did not _expect_ the program to break down for some specific number, but given that any physical implementation will break down at some point, how can we say that it actually implements this program (rather than one that just refuses to give an answer once the input exceeds some number)? Why would you reject the explanation that the thing really _is_ a program if you're fine with the breakdown when it comes to the thing being a representation of something abstract? What's the point of the abstract thing to begin with? (Do you have a reference for the Katz reply to Kripke, by the way?)

      "To me side-stepping the problem of intentionality is putting what is really interesting about language back into a black box from with 'the mental' allegedly was rescued when Chomsky overthrew behaviourism. How can you hope to account even in principle for creativity if you sidestep these issues?"

      I'm not sure I understand what you mean by "creativity". Do you simply mean _productivity_? Then I fail to understand why it couldn't be accounted for. If you actually mean "creativity" in the sense of "appropriate but not determined" (I don't remember the exact wording), I do not think that is something Chomsky has any illusions about providing a theory of. To the contrary, I always considered Chomsky's emphasis on the human ability to use language creatively (in this sense) to motivate a limitation of what is to be explained --- merely the ability to understand and produce sentences, not how it is actually put to use (as a behaviorist theory would necessarily try to do, as, well, it would only recognize behavior as real).

      "But apparently it IS an issue that worries some people - I cannot imagine Watumull would have written such a lengthy paper if he'd thought incoherence is nothing to worry about"
      Quite frankly, I think a major reason for somebody writing a lengthy reply is not so much genuine interest or worry but rather the fact that otherwise, Postal et al. will just keep going around taking the silence for admission of defeat.

      "Now what SHOULD interest anyone who wants to do biolinguistics is what kind of 'object' language is: is it [1] like 'having a pain' as Norbert has claimed in his "Going Postal" blog or is it [2] a formal object as suggested here and elsewhere:"

      Yes, I agree, we should be explicit about what we take a language to be. As for the quote you adduce for [2], without checking I'm confident that Chomsky would resort to his "systematic ambiguity", where "generative grammar" can refer both to the object of study and to our model thereof. Incidentally, you leave out the most plausible option (what the heck is [1] even supposed to mean? language is like 'having a pain'?!), namely [3] that language is whatever underlies the human ability to produce and understand sentences. Under a Chomskyan view, language in the sense in which Postal (and apparently you and at least half of contemporary philosophers of language) thinks "language" should be understood is not an interesting object of study, and it doesn't matter whether such languages exist as abstracta or not; they have no part to play in the theory; for the theory, all there ultimately exists are speakers.

      Delete
    16. "You are right that currently we do not know much about how the brain works. So what makes you think then that Chomsky's proposal is on the right track? If he has novel insights about brain properties one would assume he would share them with us? If he is just as much in the dark as the rest of us what makes his guess better than say Tomasello's or Everett's?"
      and
      "In Language of Mind Chomsky admits he has no clue how his model could be "translated into some terms that are neurologically realizable"."

      So much the worse for current neuro-science, is what I expect Chomsky would reply when pressed hard. And rightly so: why suppose that current neuro-science offers the proper reductive basis for cognitive theories? If that were the case, basically all of cognitive science - including Tomasello and Everett - would be in trouble as, to repeat, we don't have the slightest idea how to translate any talk about mental representations into "neurologically realizable terms". In fact, Tomasello and Everett are committed to future science delivering much more than Chomsky is, namely a proper theory of social cognition (if I understand their proposals correctly), something I doubt anybody is very optimistic about.

      Delete
    17. I will reply to most of your post later but want to express genuine puzzlement regarding:

      "Quite frankly, I think a major reason for somebody writing a lengthy reply is not so much genuine interest or worry but rather that fact that otherwise, Postal et al. will just keep going around taking the silence for admission of defeat."

      The only reason for making such an astounding comment I can imagine is that you never have bothered to read WHOSE silence Postal interprets as a sign of inability to come up with a rebuttal. So here is the critical passage and a link to the paper:

      "It is odd for my opposite in the present exchange to be anyone other than Chomsky. For a reader might ask this. As the target of a vitriolic and shocking accusation like that addressed against Chomsky in Postal (2004: chap. 11), one appearing in a refereed volume published by one of the most prestigious university presses in the world, would they not seek to vigorously refute the charge? But in the intervening five years, Chomsky has, to my knowledge, not even mentioned the criticism; ditto for the uncompromising criticism in Levine & Postal (2004). By exercising his undeniable right of silence here, Chomsky leaves unimpeded the inference that he has not attempted a refutation because he cannot; see fn. 9 below for the substanceless alternative which Chomsky has adopted. In the present discussion, Collins has then chosen to defend something its own author is unwilling to". [Postal, 2009] http://ling.auf.net/lingbuzz/001608

      Given that Postal holds that an attempted reply by John Collins "leaves unimpeded the inference that [Chomsky] has not attempted a refutation because he cannot" what would make you think a paper by Watumull would change that state of affairs? Especially given that the latter has made clear that he does not speak on behalf of Chomsky but on behalf of himself. He has of course every right to do that if he feels his own position [which he points out differs in important aspects from Chomsky's] should be defended against Postal's incoherence claim. But given that Postal has never named Watumull as someone holding an incoherent position such defence is entirely proactive and has nothing to do with Chomsky's silence.

      Furthermore, the incoherence accusation was only one of several against which Postal claims Chomsky has no defence. So if, as you stipulate, Watumull's reason for replying to the incoherence accusation was not "genuine interest or worry", then why would he not have picked any one of the other accusations made by Postal? For example the claim that "much of the persuasive force of Chomsky’s linguistics has been achieved only via a mixture of intellectual and scholarly corruption" [ibid., also see Postal 2004 and Levine & Postal: http://ling.auf.net/lingbuzz/001634]. To me this seems to be a claim in much more need of addressing than a claim about ontology.

      Regardless of why Watumull decided to pick the incoherence issue, his paper is no reply by Chomsky and will not prevent that "Postal et al. will just keep going around taking CHOMSKY'S silence for admission of CHOMSKY'S defeat" - if this is what Postal has in mind.

      Delete
    18. Let me address another of your points:

      "In Language of Mind Chomsky admits he has no clue how his model could be "translated into some terms that are neurologically realizable"."

      So much the worse for current neuro-science, is what I expect Chomsky would reply when pressed hard.

      Let me see if I get this right: Chomsky admits that HIS OWN framework is not based on anything that is neurologically realizable and this is somehow the fault of 'current neuro-science'? How so? Chomsky is the foremost Biolinguistic Scientist, he has done 40+ years of research, consistently making progress. So why is it someone else's fault that he does not know how to provide a model that is neurologically realizable for his own framework? What am I missing here? Has Chomsky made no progress? Is he not a scientist? Why does he insist on basing his own work on set theory when he knows [1] that we do not have sets in our heads and [2] that his own framework forces him to "accept things that we know don’t make any sense" [Chomsky, 2012]? Why is he interested in work on nematodes when important brain science needs to be done? These are HIS choices - so what justification could he possibly have to turn around and blame his failure to provide a framework that is based on things that make sense on 'current neuro-science'?

      If that were not astonishing enough, you also claim: "In fact, Tomasello and Everett are committed to future science delivering much more than Chomsky is, namely a proper theory of social cognition (if I understand their proposals correctly), something I doubt anybody is very optimistic about"

      I have by now some doubts that you do understand their proposals, but be that as it may, why would they need future science to deliver more than Chomsky does? This only seems to follow if you buy into the Fodorian modularity of mind idea. But there is good reason to believe that the mind doesn't quite work as Fodor thinks. Besides it is only on Chomsky's view that the entire lexicon is innate as well. At one point he has to reveal where in the brain that is located. Tomasellian kids LEARN language. And if as much of language is 'outside of the head' as Everett believes, then he relies even less on future neuroscience. It might be a good idea for you to familiarize yourself a bit with views you seem to know only second hand from what Chomsky says about them, and THEN you can go on and criticize them.

      Talking of reading: read Chomsky's Cartesian Linguistics, 2009 edition, in which McGilvray informs us about the great progress that has been made towards accounting for linguistic creativity. The arguments by Katz are in his 2004 book SENSE, REFERENCE, AND PHILOSOPHY. There is really no point in engaging with this clever phil-o-mind stuff you seem to find amusing - once you have read Katz you will understand that these are non-issues.

      Delete
    19. ""Postal et al. will just keep going around taking CHOMSKY'S silence for admission of CHOMSKY'S defeat" - if this is what Postal has in mind."

      Why is that scientifically even the least bit interesting, though? If that is indeed what Postal has in mind, I doubt that he is interested in attacking ideas rather than Chomsky himself, which he can feel free to do, but I don't care about Postal's personal dislike for anyone.

      "why would he not have picked any one of the other accusations made by Postal? For example the claim that "much of the persuasive force of Chomsky’s linguistics has been achieved only via a mixture of intellectual and scholarly corruption"".
      I guess because he picked one that at least looks like being about a scientific issue rather than persons and their (alleged) character traits.


      "Chomsky admits that HIS OWN framework is not based on anything that is neurologically realizable and this is somehow fault of 'current neuro-science? How so? Chomsky is the formost Biolinguistic Scientist, he has done 40+ years of research, consistently making progress. So why is it someone else's fault that he does not know how to provide a model that is neurologically realizable for his own framework? What am I missing here?"

      What you are missing (and apparently refusing to reply to) is that the same problem - having no idea of how to translate current theories into "terms that are neurologically realizable" - holds for other theories involving talk of mental representations. Chomsky is one of the few people I'm aware of who have the decency of pointing out that there is a huge gap between brain- and cognitive sciences at the moment. And the other point you're missing is that one of the main problems with requiring that some special science be reducible to what we currently consider more fundamental is that we have good reason to believe that our current version of the more fundamental science is incomplete and might well lack what would be required to perform a proper reduction. If you're so forthcoming with reading suggestions, may I point you to Chomsky's 2009 JoP article?
      Are you willing to bite the bullet and follow the Churchlands down the road of eliminative materialism because we know that there being intentional states in people's heads doesn't make any sense given our current knowledge of the brain? Or...

      "There is really no point in engaging with this clever phil-o-mind stuff you seem to find amusing - once you have read Katz you will understand that these are non-issues."
      ...has Katz solved all these problems? That'd be great news, really, if I ever find the time I'll check out his book. But I'd be surprised if virtually everyone working on "this clever phil-o-mind stuff" has just overlooked or chosen to ignore his work.

      "Tomasellian kids LEARN language. And if as much of language is 'outside of the head' as Everett believes, then he relies even less on future neuroscience."
      And how do they learn language? Whatever proposal is being made along those lines by Tomasello or Everett, these proposals will at some point also have to be translated into "terms that are neurologically realizable", or is that an unfair requirement? And if these theories involve additional cognitive abilities, well then, the theories about these abilities will have to be translated into "terms that are neurologically realizable", no?

      Delete
    20. "This only seems to follow if you buy into the Fodorian modularity of mind idea. But there is good reason to believe that the mind doesn't quite work as Fodor thinks. Besides it is only on Chomsky's view that the entire lexicon is innate as well. At one point he has to reveal where in the brain that is located."
      For me, the strongest argument for a (modest, not the radical evo-psych kind of) modularity of mind is pragmatic --- as long as there are somewhat insulated cognitive abilities there is some hope of getting insightful theories about these. Yes, it may turn out that there is no such thing as cognitive modules, and that in fact it's "general intelligence" but if that's true that's it for insightful theories (unless, of course, you are optimistic that we'll get a theory of general intelligence at some point which you are free to be; I'm not, and I take it I'm not alone with this skepticism).

      "Talking of reading: read Chomsky's Cartesian Linguistics, 2009 edition, in which McGilvray informs us about the great progress that has been made towards accounting for linguistic creativity."

      Can you give me a more detailed pointer as to what you are referring to? From skimming the foreword, I have no idea what you're talking about; there is a stark difference between "accounting for linguistic creativity" and "making some sense of how that readily observable kind of creativity is possible" (p.41), even when "[a]pparently, our sciences cannot deal with creativity and [are] inadequate to deal with human action and behavior in that very wide range of cases in which sententially expressed concepts play a constitutive role." (ibid.), don't you think?
      I may simply have misread all of my Chomsky or I may just misremember (it's been a while), but for me he always was very explicit about the problem of accounting for (as in, coming up with a scientific theory that explains) "creativity" (is that what he called "Descartes' Problem"? not that labels matter) not being one Generative Linguistics has any illusions about solving.

      Delete
    21. "Postal et al. will just keep going around taking CHOMSKY'S silence for admission of CHOMSKY'S defeat" - if this is what Postal has in mind."
      Why is that scientifically even the least bit interesting, though? If that is indeed what Postal has in mind, I doubt that he is interested in attacking ideas rather than Chomsky himself which he can feel free to do but I don't care about Postal's personal dislike for anyone.


      Postal may not like Chomsky as a person, but what he is attacking are clearly Chomsky’s ideas. These ideas have influenced a great number of people and continue to do SO in spite of being fundamentally flawed. If basing your scientific career on an incoherent framework does not matter to you, that is your choice. But for many people who wish to make a genuine contribution to our collective knowledge, it would be important to work on something that has NOT been shown to be incoherent. Do you think that people who work in a Chomskyan framework because it is based on CHOMSKY’s ideas will care one iota if Postal refutes Watumull’s arguments? If Chomsky were to react to such a refutation at all, he would likely do what he has done for decades, namely, claim that Watumull just did not understand him properly. Norbert has repeatedly reminded us that many people do not understand Chomsky – how can YOU know that Watumull understands Chomsky? Now quite contrary to what you seem to believe,
      Postal cares a great deal about linguistics as a field, and knows first hand how powerful the influence of Chomsky’s ideas is on young linguists. I imagine that it is for that reason that he has stressed the striking fact of Chomsky's failure over decades to respond to the types of criticisms at issue.

      "why would he not have picked any one of the other accusations made by Postal? For example the claim that "much of the persuasive force of Chomsky’s linguistics has been achieved only via a mixture of intellectual and scholarly corruption"".
      I guess because he picked one that at least looks like being about a scientific issue rather than persons and their (alleged) character traits.

      Why would he pick ANY if he thinks Postal is on some kind of irrational vendetta? Judging by your comments I wonder if you have read any of the specific points of Postal’s criticism before you drew conclusions about their merit? If not, the brief summary of the fundamental problem with Chomsky’s so called science is easily accessible here: http://ling.auf.net/lingbuzz/001686 This “article studies at some length NC's proposals across the decades from 1962-2002 about two putative principles of NL he has introduced. Detailed listing and analysis of remark after remark of NC’s about the supposed principles highlights the a priori unbelievably debased quality of the proposals and claims at issue and should leave no rational doubt about their thorough instantiation of the concept ‘play acting at linguistics’”. If you disagree with Postal and think either [i] the two alleged principles were genuine contributions to science or [ii] they were not but Chomsky simply was not smart enough to know that, then maybe you should defend Chomsky against a charge of scientific misconduct that would in many other fields arguably result in him losing his academic privileges.

      You write: “I may simply have misread all of my Chomsky or I may just misremember (it's been a while)”

      If you do not remember what Chomsky wrote, and presumably never have read the bold and far-reaching promises he made in the 1950s/60s regarding solving the problem of creativity, how can we debate that issue? Similarly, you present the Kripkensteinian pseudo problem, I suggest a specific source where it has been addressed and you tell me you have no time to read the solution [which is very different from disagreeing with the solution and giving reasons for your disagreement]. I am not sure it makes sense to continue a debate if you refuse to at least look at proposed solutions.

      Delete
    22. "If you do not remember what Chomsky wrote, and presumably never have read the bold and far-reaching promises he made in the 1950s/60s regarding solving the problem of creativity, how can we debate that issue?"

      I'd like to learn where Chomsky promises us the solution to the problem of creativity. But even if he had promised it in the early 60s, say, I think there's ample textual evidence that this no longer is on the agenda.

      "Similarly, you present the Kripkensteinian pseudo problem, I suggest a specific source where it has been addressed and you tell me you have no time to read the solution [which is very different from disagreeing with the solution and giving reasons for your disagreement]. I am not sure it makes sense to continue a debate if you refuse to at least look at proposed solutions."

      It's not so much about the "Kripkensteinian pseudo problem" as about the general lack of promising approaches to translating talk about mental representations into talk about brains, and that this ought to be considered a problem for most of cognitive science as well if you think bio-linguists ought to worry about there not being specific proposals in this direction, a point that you have been consistently ignoring (which is different from disagreeing with the point and giving reasons for your disagreement).
      But I think we can agree that this debate is getting pointless, so I'll just call it quits here.

      Delete
    23. minor erratum:

      "lack of promising approaches to translating talk about mental representations into talk about brains, and that this ought to be considered a problem for most of cognitive science as well, if you think bio-linguists ought to worry about there not being specific proposals in this direction, a point that you have been consistently ignoring (which is different from disagreeing with the point and giving reasons for your disagreement)"

      I had asked you based on what you prefer Chomsky's framework. Turning around and answering that everyone has problems re translating talk about mental representations into talk about brains is not an answer to this question. You still have not given one. I happen to agree that at the moment no one has a satisfactory account - so why would I argue with you about that? I have given no reasons for disagreement because there is no disagreement.

      Delete
    24. I hope my continuing despite having called it quits isn't already taken as the basis for a potential argument to undermine my credibility...

      "I had asked you based on what you prefer Chomsky's framework."
      Then probably I didn't make this clear enough. The major attraction for me is pragmatic, so here goes in a nutshell: If there is any theory to be had about human cognition, it's not going to be a theory that accounts for "everything" (and establishing this - to me at least - always was the main reason for bringing up "creativity"). And if it's going to be an insightful theory about some smaller aspect, say, our ability to produce and understand sentences (and our ability to acquire this ability), we'd better not use too many notions that themselves stand in need of explanation, say, "theory of mind" or "rich social cognition" or "metaphorical extension", as the currently major alternative paradigm I'm aware of (usage-based approaches, I think it's called) suggests.

      I prefer a theory that tries to do without recourse to these additional notions, and as I haven't seen strong evidence in favor of this being an utter failure, I still have my hopes up that this will get us somewhere. Whereas I'm very skeptical of there ever being insightful theories of, say, our "general social cognition", but yes, those are just my pragmatic reasons for preferring a "Chomskyan" (note the scare quotes; I'm no adherent of Minimalism but would probably qualify as a bio-linguist) approach.

      (Additional question: I dug up Katz 2004, and I don't find any discussion of Kripke (1982) (although it's in the bibliography, it doesn't seem to be cited even once in the main text), and his discussion of Quinean indeterminacy (which you also might have had in mind?) didn't strike me as providing a way out if one shares "Quine's empirical assumptions". And I didn't find anything about attempts to naturalize intentionality...
      So in case I'm simply overlooking the relevant part, could you give me page numbers?)

      Delete
    25. Thank you for the clarification. Let me add a couple of clarifications as well

      1. You misunderstood my reference to Katz. It came up in response to a very specific point: having a computer [ex hypothesi we know its inner workings] calculating the F-numbers based on a program that was also known to us. You brought up Kripkensteinian worries when I said the computer does not run the program itself but a physical token - only for this VERY LIMITED case did I claim Katz had a solution [and it might very well be that he had it in more detail in one of his earlier books]. I would never claim Katz attempted to naturalize intentionality.

      2. I appreciate your pragmatic reasons. However, I would be very cautious about an implicit premise you seem to assume as unproblematic. You say you want an "insightful theory about some smaller aspect, say, our ability to produce and understand sentences (and our ability to acquire this ability)" that does not require accounting for broader aspects of social cognition you worry [probably rightly] might be difficult if not impossible to account for. Now in order for this approach to work it must be [at least in principle] possible to isolate 'language' from 'general social cognition'. From my perspective the only way such isolation could work is if Postal is right and language is a Platonic object. Then we would have some reason to believe that whatever it is that allows us to acquire and use language is sufficiently distinct from 'social cognition' that we can study it in isolation.

      But if you are not a Platonist about language and assume it is a biological organ [in whatever sense you want to use the term] I see no good reason for assuming that we CAN study language in isolation. Where would you draw a line between 'language' and 'social cognition'? Do you believe it is possible to give a full account of language acquisition that explains NOTHING about linguistic creativity? In other words, do you think a child could acquire and hence understand English but not be able to use English 'creatively'? It should be possible if we can account for one without accounting for the other.

      To repeat: I agree that at the moment no one has an account for translating talk about mental representations into talk about brains. I also doubt that the kind of 'eliminativism' of Churchland works. But I am worried about how little emphasis the biolinguists put on biology.

      Delete
  4. It seems to me that the question in Par 2 above is addressed by something Norbert said some time ago, which is that the abstract object that is the solution to the problem (such as best parse for an utterance in a context) is a *useful description* of the brain state. Not, of course, a complete one, but useful nevertheless.

    ReplyDelete
    Replies
    1. @Avery: note that my discussion with Alex is about computers while the comment Norbert made referred to I-language. I do not think anyone denies that, for computers, there is a difference between the program [software] and the computer [hardware]. And we [well, not I personally, but people who are experts] know what is 'inside' the black box of a computer - so they can evaluate what kind of abstracting away is legitimate for what purpose. In the language case we know virtually nothing about how language is implemented in the brain - so it is much more problematic to 'abstract away'. Recall Chomsky's critique of connectionists:

      "...if you take a look at the core things they’re [=connectionists, CB] looking at, like connections between neurons, they’re far more complex. They’re abstracting radically from the physical reality, and who knows if the abstractions are going in the right direction?" (Chomsky, 2012, p. 67)

      But most importantly, and this is the essence of Postal's criticism of Chomsky's ontology, for I-language the claim is that there is no difference between language and knowledge of language [recall Norbert's example of being in pain - so just like having a pain is being in a certain brain state, having an I-language just IS being in a certain brain state]. Now if all that is being claimed is that talk about I-language is a "useful description" of a brain state, then we have a very watered down biolinguistic enterprise. It is not Chomsky's biolinguistics and from what I understand not Norbert's either. So in the current context, you mean "useful" for what purpose? And will it eventually be possible to get from this "useful" description to one that is "complete" without encountering the ontological problems Postal raises?

      This is BTW not just a question for you. I notice that if you read Watumull's paper carefully there is a constant sliding back and forth between biological and 'platonic properties' of language. So he seems to be conflating in the same way as Chomsky does. This raises the question: are ALL properties of language biological properties or not?

      Delete
  5. An interesting paper.

    Hope that everyone following the discussion under the last couple of posts here can understand why Chomsky has not responded to Postal. If he had, he’d have been buried in a perpetual exchange of parallel ideas at best.

    ReplyDelete
    Replies
    1. I am afraid I do not follow your logic at all. Are you saying Chomsky would have been UNABLE to demonstrate that Postal is wrong? In that case what you say maybe makes some sense, because Chomsky would have to resort to 'parallel arguments'. But if you assume Chomsky has a convincing argument, then a refutation of Katz & Postal some 30+ years ago would have PREVENTED any exchange like the current one, not to mention a pile of publications, because reasonable people like Katz and Postal would have accepted an argument that demonstrates they are wrong and moved on with life [and likely everyone else would have done the same].

      Delete
    2. I'm sure you know what I meant. Anyway, to make it quite clear, here are a few points:

      (1) In no way do I suggest that, say, Pullum and Scholz 2006 is a bad theory. It’s their choice how to do linguistics. And it’s interesting as an alternative.

      (2) What is quite wrong is Postal and Katz’s idea that cognitive linguistics was just a virtue of necessity, a mere tool to get rid of the (allegedly) unbearable American structuralism and a bridge to the correct, i.e. Platonist (or “realists’”) linguistics. For some kinds of research (particularly applied research) it doesn’t much matter whether one is a structuralist, Platonist, generativist, constructionist or otherwise; in other cases, however, particularly in basic research, it does.

      (3) Those who claim that there is a difference between language and the knowledge of language, who compare language to logic and mathematics while at the same time admitting that the study of language is based on the intuitions of “competent speakers”, should try to answer the question of who the competent speakers are. How are we to identify them, only them and all of them? It’s clear that the logic of their approach precludes such methods of inquiry as a national referendum or an individual speaker’s intuition. But let’s follow that logic further. Mathematics and logic are (at least in retrospect) based on the agreement of (the most) competent mathematicians and logicians, respectively. These disciplines are prescriptive – no one can successfully negotiate about whether 2 + 2 is 4 or 7, or about whether the function f(z) = 1/z is analytic in the neighbourhood of z = 0. Language norms can be and are being negotiated among speakers who are not linguists. But if language is supposed to be the same kind of animal as math and logic, then it can’t depend on laypeople’s agreement; it has to be what (the most) competent linguists agree it is – which implies an extreme form of prescriptivism. This is the paradox of Platonism, for it doesn’t work that way: it does respect the non-linguist speakers’ intuitions.

      (4) The cutting edge of linguistic research is internalist rather than externalist; it’s approaching the brain rather than the eternal Empire of Ideas. Linguistics is on its way from the humanities to (natural) science, not the other way round. As a model case, I mention Lassiter’s 2008 paper and the response to it by Lohndal and Narita. The latter authors were quite right; however, they presented their truths in an unfortunate way. I appreciate Lassiter’s attempt to bring something new into the muddy waters of the humanities, but at the same time I can only agree with Lohndal and Narita that it’s far from what can be called science. It doesn’t mean it’s bad; many people do that and like that, that’s their choice and it’s appreciated by some, too. It’s not a matter of right versus wrong, it’s a matter of taste, of preferences, but it’s not science. On the contrary, Katz & Scholz is science, but not at the cutting edge. Howgh.

      Sorry for answering so late, I've been and still am occupied with a non-scientific enterprise in Czech linguistics ;)

      Delete
  6. "Now what SHOULD interest anyone who wants to do biolinguistics is what kind of 'object' language is: is it [1] like 'having a pain' as Norbert has claimed in his "Going Postal" blog or is it [2] a formal object as suggested here and elsewhere"

    Of interest to the philosopher perhaps, but for the scientist, this is not an either-or:
    having language is like [1] having a pain -- phenomena in the natural world.
    One then could proceed to study these phenomena, perhaps making some idealizations, further observations, possibly construct models, as is done throughout science.
    The scientist does not presume the conclusion ([2] language must be a formal object), but creates (often formal) models to account for and perhaps explain the phenomenon.
    (Mathematicians, on the other hand, often begin with a notion of some kind of formal objects, and then explore their properties, relation to other kinds, etc.)


    "Part of the ingenuity of Chomsky's refusal to construe his view in terms of representations with representational content
    is that he entirely side-steps (or at least, tries to side-step) the problem of intentionality
    (something that becomes very clear in his discussion(s) with Georges Rey)."

    I think this is exactly right, not for the reason of "side-stepping the problem of intentionality", but to avoid infinite regress in explanations of cognitive capacities like in the old joke:
    Vision is like a tv screen in the mind...
    But who is watching the tv?

    ReplyDelete
  7. You say: "Of interest to the philosopher perhaps, but for the scientist, this is not an either-or:
    having language is like [1] having a pain -- phenomena in the natural world.
    One then could proceed to study these phenomena, perhaps making some idealizations, further observations, possibly construct models, as is done throughout science".

    I certainly agree that this is what a scientist like Chomsky OUGHT to do. You can hardly blame me, the philosopher, that the 'model' Chomsky offers in SCIENCE of Language [which allegedly was the 2012 state of the art] amounts to saying that the operation Merge takes objects and combines them into new objects.

    Chomsky further says his work "assumes set theory – I would think that in a biolinguistic framework you have to explain what that means". But he offers no explanation there or elsewhere of how sets can be biologically implemented.

    Next he says: "We don’t have sets in our heads. So you have to know that when we develop a theory about our thinking, about our computation, internal processing and so on in terms of sets, that it’s going to have to be translated into some terms that are neurologically realizable"
    Again, NO theory or model is offered.

    He admits that so far his talk about Merge and sets is "something metaphorical, and the metaphor has to be spelled out someday" (Chomsky, 2012, p. 91).

    So if you can enlighten me where Chomsky's scientific model of the language faculty in the human brai is hidden I'd be much obliged.

    Next you say: "The scientist does not presume the conclusion ([2] language must be a formal object), but creates (often formal) models to account for and perhaps explain the phenomenon."

    Again we agree. But seemingly you missed that I have cited the scientist [Chomsky], not said what I think language is. I have been asking the biolinguists here for months now WHAT specific brain models have been developed but apparently there are none. Based on what can we then talk about brain science?

    Now there has certainly been progress in LINGUISTIC work of the kind David Pesetsky mentioned. But as far as I can tell that work has never been related to any specific brain properties. So the formal models explain the kind of properties a Platonist like Postal also attempts to account for using a different model. So what is the BIOLOGICAL component of the biolinguistic models?

    ReplyDelete
    Replies
    1. CB quoting Chomsky: "talk about Merge and sets is 'something metaphorical, and the metaphor has to be spelled out someday'"

      It seems to me this is standard operating procedure in any science, Chomsky is simply being more honest about it than some.

      How is formulating Merge and its set-theoretic properties as (part of) a model of the human language faculty any different than physicists positing String theory (more precisely M-theory) as (part of) a model of the universe?

      Both String theory and Merge are metaphors, idealized, abstract models.
      No one has seen strings: 1-dimensional vibrating objects embedded in an 11-dimensional space to explain all of physics seems pretty far-fetched; must such work be dismissed for not being PHYSICAL enough?
      What does it mean that Merge isn't BIOLOGICAL enough?
      Whether String theory is physical enough, whether Merge is biological enough, are empirical matters: do they contribute to understanding the data, to good explanations, further work, etc.
      Do platonists see these cases as different?

      "So if you can enlighten me where Chomsky's scientific model of the language faculty in the human brai[n] is hidden I'd be much obliged."

      I think you are making a category mistake here; a scientific model is not "in the human brain", anymore than String theory is "in the universe".

      We're rehashing the "psychological reality" debates in linguistics back in the 70s, 80s, with nothing new.
      I suggest critiquing Rules and Representations, rather than the interview book, if you are really interested in this.

      Sidebar for any Platonists:
      Do strings exist as abstract objects?
      Would strings still "exist" if String Theory is abandoned?
      If languages exist as abstract objects, does not Merge with its properties "exist" also, whether or not it is shown to be a useful construct for explaining the language faculty? (Similarly for ALL grammatical theories ever proposed.)

      Delete
    2. "So if you can enlighten me where Chomsky's scientific model of the language faculty in the human brai[n] is hidden I'd be much obliged."

      I think you are making a category mistake here; a scientific model is not "in the human brain", anymore than String theory is "in the universe".

      That was not a category but a grammar mistake. [You may have noticed that I asked Alex for clarification when he made a similar mistake and did not assume he really meant something stupid.] What I meant to ask is where the language faculty, stipulated by Chomsky's model, is located. The LF cannot be just Merge, and as far as I know Chomsky has from early on hypothesized that we need to “formulate a hypothesis about innate structure that is rich enough to meet the condition of empirical adequacy” (Chomsky, 1967, p. 3). After 45 years of biolinguistic research surely we know something more about the innate structure?

      You say: "We're rehashing the "psychological reality" debates in linguistics back in the 70s, 80s, with nothing new." There are really only 2 possibilities: either [i] LF is a biological 'object' part of the human brain or [ii] it is not. Since Chomsky has repeated [i] over and over again and since he also has rejected [ii] there is no debate. If you assume [ii], then maybe you are some kind of Platonist?

      As for why some physicists reject string theory I suggest you ask them. It is not clear to me why you seem to think that a LINGUISTIC Platonist like Postal should have a theory about strings. Do you expect your dentist to know how to fix your car?

      Delete
    3. "After 45 years of biolinguistic research surely we know something more about the innate structure?"

      Surely we do; do you need pointers to that literature?

      "You say: "We're rehashing the "psychological reality" debates in linguistics back in the 70s, 80s, with nothing new." There are really only 2 possibilities: either [i] LF is a biological 'object' part of the human brain or [ii] it is not. Since Chomsky has repeated [i] over and over again and since he also has rejected [ii] there is no debate. If you assume [ii], then maybe you are some kind of Platonist?"

      I think [i] is a good working hypothesis for scientists. Like all hypotheses, subject to verification, revision, refinement, etc., but not incoherent as stated.

      "As for why some physicists reject string theory I suggest you ask them. It is not clear to me why you seem to think that a LINGUISTIC Platonist like Postal should have a theory about strings. Do you expect your dentist to know how to fix your car?"

      What Paul Postal thinks is not really the issue here.

      The issue is that the platonist arguments for language as a formal, abstract object work just as well (poorly) for all sorts of other things, such as: vibrating strings in 11-dimensional spaces, every grammatical formalism ever conceived of, maybe even unicorns.

      Perhaps that is what platonists intend: that the constructs posited by discredited theories are just as real as those of currently held theories.

      If so, I don't understand how platonists square these criteria for realism, with the scientific method as usually carried out. The same problem does not arise in the biolinguistics programme. I mention this line of thinking, ONLY because the platonist argument seems to be: biolinguistics is so incoherent the only reasonable alternative is platonism: language as formal, abstract object.

      Delete
    4. If you like to chase platonic unicorns that is of course entirely up to you, but please do not expect me to join you. The only sentence of your post that merits a reply is:

      I mention this line of thinking, ONLY because the platonist argument seems to be: biolinguistics is so incoherent the only reasonable alternative is platonism: language as formal, abstract object.

      It is extremely puzzling to me why you would think this. Postal holds Chomsky’s ontology to be incoherent because Chomsky assumes language to be both (i) part of the human brain (that is, a finite, physical object) and (ii) based on set-theoretic objects generated by the operation Merge (e.g., “a system of discrete infinity consisting of hierarchically organized objects” (Chomsky, 2008, p. 137)). Because (i) and (ii) cannot apply simultaneously to the same object (e.g. an I-language), Chomsky’s ontology is internally incoherent. This incoherence arises entirely independently of whether a critic defends linguistic Platonism, as Postal does, or linguistic naturalism, as I do. In case it is still unclear, read http://ling.auf.net/lingbuzz/001573 where this point is explained in more detail.

      Delete
    5. Alas, I am unable to see that incoherence. Making mathematical models, as in (ii), is routine practice in the natural sciences. Your supposed ontological incoherence would apply not only to biolinguistics but to any use of infinitary mathematics (real numbers, Lie groups, fractals, ...) in the physical world; it would demolish virtually all of physics. I doubt that is the intent.

      Delete
    6. I have always thought that the solution to the incoherence issue was to treat the platonic objects as a classification scheme for the real-world things we're studying; the sentence structures are classifications for linguistic performances, the grammars for the brain states (or states of whatever alternative organ one might wish to suggest) that are responsible for these performances. Then there is a platonic subject associated with linguistics, it's mathematical linguistics, as practiced by Eric Stabler, Alex Clarke and many others (but it's not what I and other descriptive/theoretical linguists do).

      Another contributor to the incoherence issue is Chomsky's systematic unwillingness or inability to distinguish between i-language as a description of the brain-state responsible for certain aspects of linguistic performance (the ones that he, me and Norbert are primarily but not exclusively interested in), in which case it can in a sense be a 'part of the brain' (more precisely, a description of an aspect of the brain, which may or may not correspond to a 'part' in the usual understanding of the term), and i-language as the infinitary set produced by this state, in which case it can't be part of the brain in any sense whatsoever (but could be described as an aspect of brain function).

      I hope that this or some similar formulation offers a way out of this mess, because I think it is completely unrealistic to expect Chomsky to say anything much different or clearer than what he's already said, as desirable as this might seem to be. (If I've forgotten about or never noticed some important clarification by Chomsky of what i-language is supposed to be, I'm sure somebody will point this out.)

    7. "Eric Stabler" --- am I right to assume you mean Ed(ward) Stabler? (smile)

    8. Avery writes: "there is a platonic subject associated with linguistics, it's mathematical linguistics, as practiced by Eric Stabler, Alex Clarke and many others (but it's not what I and other descriptive/theoretical linguists do)"

      What is the real distinction here between mathematical linguistics and descriptive/theoretical linguistics? Is it really a difference in what's being studied (a platonic thing versus a non-platonic thing)? I would have thought that Ed Stabler and Alex Clark take themselves to be studying language as a cognitive phenomenon in more or less the way Chomsky introduced it, though they may disagree with Chomsky on many details. To me the differences are in the methods, not the goals or the object of study. Otherwise, are we going to conclude that Chomsky's early work on the Chomsky hierarchy was the study of a platonic linguistics?

      Of course you can also use mathematical methods to study "platonic linguistics", just as you can use the "normal" tools of descriptive/theoretical linguistics to do so. But I think the platonic/internalist issue is just orthogonal to the mathematical/non-mathematical one.

      Also: "distinguish between i-language as a description of the brain-state responsible for certain aspects of linguistic performance ... and i-language as the infinitary set produced by this state"

      I would have thought that I-language is clearly intended to be the former. Is there something Chomsky has said that suggests taking I-language to be a set of expressions? That seems to be almost exactly what he intends I-language not to be (i.e. almost E-language).

    9. I consider what I do to be theoretical linguistics, which bears the same relationship to linguistics that theoretical physics does to physics: given that it is theoretical, it has to be mathematical; otherwise it descends into idle pseudo-scientific speculation. The fact that it is mathematical doesn't mean it is Platonic: look at every branch of functioning science, which is in general highly mathematized.

      If you are a Platonist then you don't care about acquisition at all, whereas for me that is the central problem.

      (Parenthetical aside: indeed, one of the points of the Postal critique, and of many others over the years (e.g., Devitt), is that for all the rhetoric of Chomskyan linguistics, which reaches new heights in biolinguistics, its practitioners have insulated themselves from all of the nonlinguistic data, which does make it look like Platonism.)

    10. Oops, yes, Eric -> Edward. I will suggest as a pathetic excuse the existence of a nephew called 'Eric' ...

      A typical 'linguistics' question would be how well or badly a grammar fits the apparent data of a language, or whether a framework/theory can give good accounts of languages with the observed range of typological variation; a typical 'mathematical linguistics' question would be whether some formalism is mildly context-sensitive, or, if not, usefully weaker than fully context-sensitive.
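      For readers outside the formal side of the field, here is a toy illustration (mine, not Avery's) of the kind of object such questions are about: the language {a^n b^n c^n}, a textbook example that lies beyond the context-free languages but within the mildly context-sensitive ones; the sketch below merely checks membership.

        # Toy membership check (illustrative only) for {a^n b^n c^n}, a standard
        # example of a pattern beyond context-free but mildly context-sensitive.
        def in_anbncn(s):
            n = len(s) // 3
            return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

        print(in_anbncn("aabbcc"), in_anbncn("aabbc"))  # True False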

      As for i-language, one possible place to get confused is KLT:57 (mid-page), where the child is said to come to 'know an i-language', which certainly invites interpreting i-language as something other than a brain state (which would be more naturally described as 'acquired', I think). Then KLT:249-250 has 'two grammars of the I-language abstracted from this state'. On KLT:50 we read about a shift to mental representation *and computation* as part of the shift from E- to I-language, but without a (putatively infinite) structure set in the picture, what is there to compute? And if there is such a set in the picture, what is its name? I think that's enough to demonstrate a potential for confusion. (KLT = Knowledge of Language, Chomsky 1986.)

    11. @Alex: I think I find mathematical Platonism less crazy than the other kinds, perhaps even plausible. At any rate, studying the mathematical properties of linguistic formalisms seems much closer to a Platonistic enterprise than what I do.

    12. But if the main aim is to understand language itself, with the mathematical investigations only a means to that end, that would be non-Platonistic. And even somebody like me is 'practicing Platonism', in the sense I'm suggesting, when trying to figure out whether an analysis actually produces a given sentence with the appropriate interpretation or not.

    13. I have just posted a paper on LingBuzz (http://ling.auf.net/lingbuzz/001765) dealing in some more detail with the issues raised by Watumull. If you read it you will see why I think that he has "resolved" the incoherence indeed -- by essentially eliminating all biology from biolinguistics. And of course much of the debate here on the blog has also focussed on computational issues, NOT on biological wetware...

  8. If the dispute here revolves around the possibility of connecting grammatical phenomena to specific brain properties defined in terms of neurons, given our present collection of ideas and techniques, there is probably an unbridgeable gap between people who think it doesn't matter that nobody has the slightest idea of how to do it yet (me, Ben, Norbert & Alex), and people who think it does (Christina, Paul P).

    I'm not sure that there's anything to be done about this, as long as the people in the first group are upfront about having no concrete proposal about how to link the two domains, which they appear to be. My view for a long time has been that linguists should not purport to explain anything, but only to delineate problems that the real explanations will have to deal with.

  9. Hmm, that last sentence wasn't very well put. Rather, from the point of view of people who study neurons, linguistics is providing the questions, not the answers. For people who are happy with Marr's levels, it is suggesting rather high-level sketch answers.

    1. Thanks, Avery, for clarifying what you think the source of disagreement is. I am speaking only for myself here [though I hope Paul would agree]: NO, for me this is not the issue. It is not about you [pl.] being happy with an opaque box and me requesting a transparent box, so to speak.

      The worry is that Chomsky makes an ontological category mistake, which cannot be fixed [for example by doing more brain research] but only eliminated. For Chomsky the essence of language is Merge, which is essentially set formation. Set formation produces sets. But Chomsky also says language is biological, an I-language, [part of the brain]. So he is saying sets ARE biological. This is not a failure of neuroscience, or of any kind of science. The only way to address the problem is to withdraw one of the two incoherent assumptions: either that language is biological, or that it is set-theoretic.
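      For concreteness, here is a minimal sketch (purely illustrative, not anything from the thread) of the set-theoretic construal of Merge under discussion, on the usual definition Merge(X, Y) = {X, Y}; the lexical items are placeholders.

        # Minimal sketch (illustrative only): Merge read set-theoretically as
        # Merge(X, Y) = {X, Y}. frozenset is used so the resulting sets can nest.
        def merge(x, y):
            return frozenset([x, y])

        dp = merge("the", "book")  # frozenset({'the', 'book'})
        vp = merge("read", dp)     # frozenset({'read', frozenset({'the', 'book'})})
        print(vp)                  # a nested set, the kind of object at issue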

      Delete
    2. It is standard in computer science to use sets (e.g. http://docs.oracle.com/javase/6/docs/api/java/util/Set.html).
      Is this an ontological category error?
      What about a computer program that uses numbers? Is that a category error?

      A human being can add and subtract numbers -- I just added three and three to make 6 -- and one could argue that this is a category error, because numbers are abstract, potentially infinite objects and the brain is a concrete and finite biological entity, but this seems to misunderstand the nature of abstract objects so completely that I can't take it seriously.
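      A minimal sketch (illustrative only; the values are arbitrary) of the practice being pointed to: a program "uses sets" and "uses numbers" in the ordinary computer-science sense, even though what the running machine holds are finite physical tokens.

        # Illustrative only: what the machine stores are finite tokens; the
        # set-theoretic and arithmetic vocabulary describes what those tokens do.
        a = {1, 2, 3}
        b = {3, 4}
        print(a | b)   # {1, 2, 3, 4} -- union in the ordinary set-theoretic sense
        print(3 + 3)   # 6 -- "I just added three and three to make 6"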

    3. Alex, if something seems as utterly stupid as what you attribute to me, it might be a good idea to check whether you understand the point I made. I assume you have heard of the type/token distinction? What you have just added are TOKENS, and no one denies that you can do that, or that my keyboard has tokens of numbers on it, etc.

      I know you guys in CS use the kind of shorthand you do, and that is perfectly fine for your purposes. But we are talking about ontology here. So unless you believe you literally have the integers themselves [as opposed to some physical tokens of them] in your head while you add them, you have not shown that I terribly misunderstand the nature of abstract objects.

    4. So why do you object then to sets being tokened in human brains? That seems to resolve this supposed ontological problem, without any great violence to Chomsky's original intention.


    5. I'm not worried about the 'category mistake' because I take it to be a way of saying that the brain does something that can be usefully modelled as the formation of a set. Perhaps it is better modelled as the formation of something other than a set (hence C's disquisition about our not having sets in our heads). Scare quotes because I'm not convinced that it's any more of a category mistake than claiming that I ate breakfast sitting on a chair, as opposed to on a region of mostly empty space with some electrically charged stuff in it that can be usefully described as a chair.

    6. Much more serious, to me, are the problems that flow from the severely undersupported assumption that Merge is binary. That headed, binary structures play a significant role in syntax seems to be true, but this falls massively short of a case that they are all that is in play there.

  10. The question is not why I object but why Chomsky apparently objects. The K&P [Katz & Postal] criticism has been around for decades now. If tokens are all Chomsky needs, don't you think he would have said so by now?

    Since this point seems to be missed again and again, here is another link to the two-minute version of what is at issue: http://ling.auf.net/lingbuzz/001573

    Why would Chomsky have written:

    "In the work that I’ve done since The Logical Structure of Linguistic Theory – which just assumes set theory – I would think that in a biolinguistic framework you have to explain what that means. We don’t have sets in our heads."

    If he does not need sets, why would it matter that we can't have sets in our heads? I assume the simple answer is that our brains would not have room for all the tokens he needs. You CS people seem to worry very little about physical limitations. That is probably a good thing in your line of work. But even you have encountered cases where you have written a program that, when run on a computer, 'made' the computer run out of memory. If that happens you usually have two options: add memory or write a different program.

    But when you talk about brains you cannot do that. Assume we have a language faculty along the lines Chomsky proposes. That thing is working RIGHT NOW, using a very finite amount of space and resources. If your theory requires it to have three times the memory it does, you can't just add memory; you have to adjust your theory. So for starters: where do you suggest the linguistic tokens are stored? Chomsky is an extreme internalist: the entire lexicon is innate. Don't ask me how this works, ask him. But if everything is tokens, these things have to be somewhere.

    Take something simple like numbers. It is probably fairly simple to have tokens for, say, 0, 1, 2, 3, 4, ..., but what do we do about very large yet perfectly finite numbers? Again, if we have no types, just tokens, how can we have, say, a token for A [the number of atoms in the universe] that is different from the token for A-1? Is the token for A heavier than the token for 4? Would our brain have the room to store the tokens for A and A-1? If not, do these numbers not exist? These are the kinds of issues Katz raised, and it is my guess that a brilliant man like Chomsky realizes it is a serious problem. Maybe BECAUSE he knows that tokens are not the answer he continues to say things like "There are a lot of promissory notes there when you talk about a generative grammar as being based on an operation of Merge that forms sets".
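    As a side note on what a token of a very large number looks like in a computational system, here is a small illustrative sketch (mine, not part of the exchange), using Python's arbitrary-precision integers, with 10**80 standing in for A; it does not settle the philosophical point, it only shows what 'token' means in a machine.

      import sys

      # Illustrative only: a token of a huge number is "heavier" (takes more
      # bytes) than a token of 4, but both are finite and easily stored, and
      # the tokens for A and A-1 are distinct.
      A = 10 ** 80             # stand-in for the number of atoms in the universe
      print(A - (A - 1))       # 1: the two tokens differ
      print(sys.getsizeof(4))  # a few dozen bytes on a typical CPython build
      print(sys.getsizeof(A))  # somewhat more bytes, still tiny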


    1. So *you* accept that the argument that merge operates on tokens of sets is a valid retort?

      I have no interest in arguing about what Chomsky or Postal might think of this argument; I am interested only in the issues themselves. I am also not interested in defending Chomsky against charges of failing to address criticisms. Rather I am interested in the validity of those criticisms.

      I care very much about the physical limitations -- though I think, along with almost everyone else over the last 30 years, that the best way of dealing with these physical limitations is through the tools of complexity analysis: namely, asymptotic worst-case bounds, perhaps modified by the FPT stuff, rather than a simple fixed finite bound on the size of storage. So maybe there are arguments against using complexity analysis and in favour of a fixed bound, but I don't know of any -- do you?
      Indeed, I care so much about these issues that I recently wrote a paper on them (with Shalom Lappin) called "Complexity in Language Acquisition", which you can find here (http://onlinelibrary.wiley.com/doi/10.1111/tops.12001/abstract); a copy is freely available.

      So yes, I agree that the brain is finite and cannot store numbers beyond a certain size. But that is a different argument, which I thought we had resolved -- namely, what sorts of idealisations are appropriate. What we are currently discussing is the ontological incoherence that you allege, which seems to be soluble by taking the sets Chomsky talks about to be merely tokens, in the same way that the numbers I manipulate when paying for my coffee in the morning are.
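      To make the contrast concrete, here is a toy sketch (purely illustrative, not from the paper cited above) of what an asymptotic characterization offers that a single fixed storage cutoff does not, using the resource counts of a CKY-style recognizer as the example.

        # Toy illustration: complexity analysis characterizes resource use as a
        # function of input length n -- here the ~n^2/2 chart cells and ~n^3/6
        # span/split-point combinations of a CKY-style recognizer -- rather than
        # positing one fixed finite bound on storage.
        def cky_resources(n):
            cells = n * (n + 1) // 2      # one chart cell per span
            steps = n * (n * n - 1) // 6  # span/split-point combinations
            return cells, steps

        for n in (10, 100, 1000):
            print(n, cky_resources(n))    # polynomial growth, no single cutoff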

    2. Sorry, FPT means 'fixed-parameter tractability', which is a refinement of classical complexity theory for problems whose difficulty depends on more than one size parameter. Largely irrelevant to the main thrust of the argument.

    3. One thing you asked that I didn't reply to -- you asked where the tokens were. So let's say that they are stored in the left frontal cortex (for the sake of argument). Given that there are only (say) 100,000 words, and that the information that can be stored in the frontal cortex is maybe in the petabytes, this doesn't seem to be a problem. But that probably isn't the point you were making.
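      As a back-of-envelope check on that storage point (my own figures, purely illustrative: roughly 10 KB per lexical entry and a petabyte-scale capacity), the gap is about six orders of magnitude.

        # Rough arithmetic, illustrative figures only.
        words = 100_000
        bytes_per_entry = 10_000                 # assume ~10 KB per lexical entry
        lexicon_bytes = words * bytes_per_entry  # 1e9 bytes, about 1 GB
        capacity_bytes = 10 ** 15                # one petabyte
        print(lexicon_bytes / capacity_bytes)    # 1e-06: a millionth of capacity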

      As to Chomsky's claim that the whole lexicon is innate, I certainly do not support this claim, which is bizarre, implausible and unsupported by any evidence whatsoever.
      I don't know how widely shared it is. Most minimalists seem to keep quiet about it. But that is again a separate issue.

    4. This comment has been removed by the author.

    5. Just a very brief reply: Postal is not objecting to views that distinguish between language and knowledge of language [though he may object to specific proposals within some of those views].

      The incoherence problem arises for a view like Chomsky's that claims there is no difference between language and knowledge of language and ALSO insists that language is set-theoretic. Such a view could not even be salvaged by your 100,000-word proposal, because we have no way to limit the lexicon of any language. A trivial example is the following sentence:

      [1] The German saying 'Sei kein Frosch' translates literally into English as 'Don't be a frog', but it means 'Don't be a spoilsport'.

      We probably agree that [1] is a sentence of English even though it contains a German clause. If we accept that, we need to add all of German [and, via parallel arguments, every other language] to your 100,000 English words [because there is no a priori limit to the words that can be used in those kinds of sentences]. And this is just one of several similar arguments [e.g., insertion of nonsense words, animal sounds and a host of other possibilities that one would need to account for] made by Postal [2004] and also cited by Jackendoff & Culicover [2005]. So for Chomsky it would be impossible to accommodate such an extensive token collection, and even for someone who is willing to move tokens out of the brain it may be awfully difficult, if not impossible, to set any size limit on the number of words.

    6. You are assuming a Platonist view of language. From a cognitive view of language, the lexicon is just what some individual knows about the words of his or her native tongue. I speak English, but I don't know (or didn't, anyway) that Frosch is a word of German. Frosch is not represented anywhere in my brain; there is no token of it until I encounter it. So I certainly don't accept that there is a token of every word of every possible language in my brain. That would be ridiculous, I agree.

  11. This comment has been removed by a blog administrator.
