
Friday, January 6, 2017

Inchoate minimalism

Chomsky often claims that the conceptual underpinnings of the Minimalist Program (MP) are little more than the injunction to do good science. On this view, the eponymous 1995 book did not break new ground, announce a new "program," or suggest foregrounding new questions. In fact, on this view, calling a paper "A Minimalist Program for Linguistic Theory" was not really a call to novelty but a gentle reminder that we have all been minimalists all along and that we should continue doing exactly what we had been doing so well to that point. This way of putting things is (somewhat) exaggerated. However, versions thereof are currently a standard trope, and though I don't buy it, I recently found a great quote in Language and Mind (L&M) that sort of supports this vision.[1] Sorta, kinda, but not quite. Here's the quote (L&M:182):

I would, naturally, assume that there is some more general basis in human mental structure for the fact (if it is a fact) that languages have transformational grammars; one of the primary scientific reasons for studying language is that this study may provide some insight into general properties of mind. Given those specific properties, we may then be able to show that transformational grammars are “natural.” This would constitute real progress, since it would now enable us to raise the problem of innate conditions on acquisition of knowledge and belief in a more general framework….

This quote is pedagogical in several ways. First, it does indicate that, at least in Chomsky's mind, GG from the get-go had what we could now identify as minimalist ambitions. The goal as stated in L&M is not only to describe the underlying capacities that make humans linguistically facile, but also to understand how these capacities reflect the "general properties of mind." Furthermore, L&M moots the idea that understanding how language competence fits in with our mental architecture more generally might allow us to demonstrate that transformational grammars are "natural." How so? Well, in the obviously intended sense that a mind with the cognitive powers we have would have a faculty of language in which the particular Gs we have would embody a transformational component. As L&M rightly points out, being able to show this would "constitute real progress." Yes it would.

It is worth noting that the contemporary conception of Merge, which combines both structure building and movement in the "simplest" recursive rule, is an attempt to make good on this somewhat foggy suggestion. If by 'transformations' we mean movement, then showing how a simple conception of recursion comes with a built-in operation of displacement goes some distance in redeeming the idea that transformational Gs are "natural."[2]
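To see how the "simplest" recursive rule might deliver displacement for free, here is a minimal sketch in code. It is my own toy rendering of the familiar set-theoretic picture, not an official formalism: Merge(X, Y) = {X, Y}, with "Internal Merge" (i.e. movement) arising whenever Y is already a part of X.

```python
# A toy rendering (my own, for illustration) of Merge as a single set-forming
# operation. External Merge combines two independent objects; Internal Merge
# re-merges a piece already inside the first argument, i.e. displacement.

def contains(x, y):
    """True if y occurs somewhere inside the syntactic object x."""
    if x == y:
        return True
    if isinstance(x, frozenset):
        return any(contains(part, y) for part in x)
    return False

def merge(x, y):
    """Merge(X, Y) = {X, Y}; one rule covers structure building and movement."""
    kind = "internal (movement)" if contains(x, y) else "external"
    print(f"{kind} merge")
    return frozenset({x, y})

vp = merge("ate", "what")   # external: builds {ate, what}
tp = merge(vp, "John")      # external: builds {John, {ate, what}}
cp = merge(tp, "what")      # internal: 'what' is re-merged from inside tp,
                            # which is wh-movement with no extra machinery
```

On this picture displacement is not an added stipulation; it is just Merge applied to a term the structure already contains, which is one way of cashing out the L&M hunch that transformational Gs are "natural."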

Note several other points. The L&M quote urges a specific research strategy: if you are interested in general principles of cognition, then it is best to start the investigation from the bottom up. So even if one's interest is in cognition in general (and this is clearly the L&M program), the right direction of investigation is not from, e.g., some a priori conception of learning to language, but from a detailed investigation of language to the implications of these details for human mental structure more generally. This, of course, echoes Chomsky's excellent critiques of Empiricism and its clearly incorrect and/or vacuous conceptions of reinforcement learning.

However, the point is more general, I believe. Even if one is not Empiricistically inclined (as no right-thinking person should be), the idea that a body of local doctrine concerning a specific mental capacity is an excellent first step toward probing possibly more general capacities seems like excellent method. After all, it worked well in the "real" sciences (e.g. Galileo's, Copernicus' and Kepler's laws were useful stepping stones to Newton's synthesis), so why not adopt a similar strategy in investigating the mind/brain? One of GG's lasting contributions to intellectual life was to demonstrate how little we reflexively know about the structure of our mental capacities. Being gifted linguistically does not imply that we know anything about how our mind/brain operates. As Chomsky likes to say, being puzzled about the obvious is where thinking really begins, and perhaps GG's greatest contribution has been to make clear how complex our linguistic capacities are and how little we understand about their operating principles.

So is the Minimalist Program just more of the same, with nothing really novel here? Again, I think that the quote above shows that it is not. L&M clearly envisioned a future where it would be useful to ask how linguistic competence fits into cognition more broadly. However, it also recognized that asking such "how" questions was extremely premature. There is a tide in the affairs of inquiry, and some questions at some times are not worth asking. To use a Chomsky distinction, some questions raise problems and some point to mysteries. The latter are premature, and one aim of research is to move questions from the second, obscure, mystical column to the first, tractable one. This is what happened in syntax around 1995; the more or less rhetorical question Chomsky broached in L&M in the late 60s became a plausible topic for serious research in the mid 1990s! Thus, though there is a sense in which minimalism was old hat, there is a more important sense in which it was entirely new, not as regards general methodological concerns (one always values simplicity, conciseness, naturalness, etc.) but in being able to ask, in a non-trivial way, the question that L&M first posed fancifully: how does/might FL fit together with cognition more generally?

So what happened between 1968 and 1995? Well, we learned a lot about the properties of human Gs and had plausible candidate principles of UG (see here for some discussion). In other words, again to use Chomsky's framing (following the chemist Davy), syntax developed a "body of doctrine," and with this it became possible to probe the more general question. And that's what the Minimalist Program is about. That's what's new. Given some understanding of what's in FL, we can ask how it relates to cognition (and computation) more generally. That's why asking minimalist questions now is valuable, while asking them in 1967 would have been idle.

As you all know, there is a way of framing the minimalist questions in a particularly provocative way, one that fires the imagination in useful ways: how could this kind of FL, with these kinds of principles, have evolved? On the standard assumption (though not an uncontroversial one; see here on the "phenotypic gambit") that complexity and evolvability are adversarial, the injunction to simplify FL by reducing its linguistically proprietary features becomes the prime minimalist project. Of course, all this is potentially fecund only to the degree that there is something to simplify (i.e. some substantive proposals concerning what the operative FL/UG principles are), and such targets for simplification only became available in the early 1990s.[3] Hence the timing of the emergence of MP.

Let me end by riding off on an old hobbyhorse: Minimalism does not aim to be a successor to earlier GB accounts (and its cousins LFG, HPSG, etc.). Rather, MP's goal is to be a theory of possible FL/UGs. It starts from the assumption that the principles of UG articulated from 1955 through the 1990s are roughly correct, albeit not fundamental. They must be derived from more general mental principles/operations (to fulfill the L&M hope). MP is possible because there is reason to think that GB got things roughly right. I actually do think that this is correct. Others might not. But it is only once there is such a body of FL/UG doctrine that MP projects will not be hopelessly premature. As the L&M quote indicates, MP-like ambitions have been with us for a long time, but only recently has it been rational to hope that they would not be idle.



[1] Btw, L&M is a great read and those of you who have never dipped in (and I am looking at anyone under 40 here) should go out and read it.
[2] And if we go further and assume that all non-local dependencies are mediated by ((c)overt) movement, then all varieties of transformation are the product of the same basic "natural" process. Shameless plug: this is what this suggests we do.
[3] Why then? Because by then we had good reasons for thinking that something like the GB conception of UG was empirically and theoretically well-grounded. See here (and the four following entries) for discussion.

Wednesday, July 20, 2016

Linguistic creativity 2

Here’s part 2. See here for part 1.

L&M identifies two other important properties that were central to the Cartesian view.

First, human linguistic usage is apparently free from stimulus control, "either external or internal." Cartesians thought that animals were not really free, animal behavior being tightly tied either to environmental exigencies (predators, food location) or to internal states (being hungry or horny). The law of effect is a version of this view (here). I am dubious that this is actually true of animals, and I recall a quip from an experimental-psych friend of mine, who claimed that the first law of animal behavior is that the animal does whatever it damn well pleases. But, regardless of whether this is so for animals, it is clearly true of humans, as manifest in their use of language. And a good thing too, L&M notes. For this freedom from stimulus control is what allows "language to serve as an instrument of thought and self-expression," as it regularly does in daily life.

L&M notes that Cartesians did not take unboundedness or freedom from stimulus control to "exceed the bounds of mechanical explanation" (12). This brings us to the third feature of linguistic behavior: the coherence and aptness of everyday linguistic usage. Thus, even though linguistic behavior is not stimulus bound, and hence not tightly causally tied to external or internal stimuli, it is not scattershot either. Rather, it displays "appropriateness to the situation." As L&M notes, it is not clear exactly how to characterize condign linguistic performance, though "there is no doubt that these are meaningful concepts…[as] [w]e can distinguish normal use of language from the ravings of a lunatic or the output of a computer with a random element" (12). This third feature of linguistic creativity, its aptness/fit to the situation without being caused by it, was, for Cartesians, the most dramatic expression of linguistic creativity.

Let’s consider these last two properties a little more fully: (i) stimulus-freedom (SF) and (ii) apt fit (AF).

Note first that both kinds of creativity, though expressed in language, are not restricted to linguistic performances. It's just that normal language use provides everyday manifestations of both features.

Second, the sources of both these aspects of creativity are, so far as I can tell, still entirely mysterious. We have no idea how to “model” either SF or AF in the general case. We can, of course, identify when specific responses are apt and explain why someone said what they did on specific occasions. However, we have no general theory that illuminates the specific instances.[1] More precisely, it’s not that we have poor theories, it’s that we really have no theories at all. The relevant factors remain mysteries, rather than problems in Chomsky’s parlance. L&M makes this point (12-13):

Honesty forces us to admit that we are as far today as Descartes was three centuries ago from understanding just what enables a human to speak in a way that is innovative, free from stimulus control, and also appropriate and coherent.

The intractability of SF and AF serves to highlight the importance of the competence/performance distinction. The study of competence is largely insulated from these mysterious factors. How so? Well, it abstracts away from use and studies capacities, not their exercise. SF and AF are not restricted to linguistic performances and so are unlikely to be intrinsically linked to the human capacity for language. Hence detaching the capacity should not (one hopes) corrupt its study, even if how competence is used for the free expression of thought remains obscure.

The astute reader will notice that Chomsky's famous review of Skinner's Verbal Behavior (VB) leaned heavily on the fact of SF. Or, more accurately, the review argued that it was impossible to specify the contours of linguistic behavior by tightly linking it to environmental inputs/stimuli or internal states/rewards. Why is the Skinnerian project hopeless? The Cartesians have an answer: our behavior, verbal behavior included, is both SF and AF. Hence any approach to language that focuses on behavior and its immediate roots in environmental stimuli and/or rewards is doomed to failure. Theories built on supposing that SF or AF are false will be either vacuous or evidently false. Chomsky's critique showed how VB embodied the twin horns of this dilemma. Score one for the Cartesians.

One last point and I quit. Chomsky's expansive discussion of the various dimensions of linguistic creativity may shed light on "Das Chomsky Problem." This is the puzzle of how, or whether, two of Chomsky's interests, politics and linguistics, hook up. Chomsky has repeatedly (and, IMO, rightly) noted that there is no logical relation between his technical linguistic work and his anarchist political views. Thus, there is no sense in which accepting the competence/performance distinction, or thinking that TGG is required as part of any solution to linguistic creativity, or thinking that there must be a language-dedicated FL to allow for the facts of language acquisition, in any way implies that we should organize societies on democratic bases in which all members robustly participate, or vice versa. The two issues are logically and conceptually separate.

This said, those parts of linguistic creativity that the Cartesians noted, and that remain as mysterious to us today as when they were first observed, can ground a certain view of politics. And Chomsky talks about this (L&M:102ff). The conception of human nature as creative in the strong Cartesian sense of SF and AF leads naturally to the conclusion that societies that respect these creative impulses are well suited to our nature and that those that repress them leave something to be desired. L&M notes that this creative conception lies at the heart of many Enlightenment and, later, Romantic conceptions of human well-being, and of the ethics and politics that would support the expression of these creative capacities. There is a line of intellectual descent from Descartes through Rousseau to Kant that grounds respect for humans in the capacity for this kind of "freedom." And Chomsky is clearly attracted to this idea. However, and let me repeat, however, Chomsky has nothing of scientific substance to say about these kinds of creativity, as he himself insists. He does not link his politics to the fact that humans come with the capacity to develop TGGs. As noted, TGGs are at right angles to SF and AF, and competence abstracts away from questions of behavior/performance, where SF and AF live. Luckily, there is a lot we can say about capacities independent of considering how these capacities are put to use. And that is one important point of L&M's extended discussion of the various aspects of linguistic creativity. That said, these three conceptions connect up in Cartesian conceptions of human nature, despite their logical and conceptual independence, and so it is not surprising that Chomsky might find all three ideas attractive even if they are relevant for different kinds of projects. Chomsky's political interests are conceptually separable from his linguistic ones. Surprise, surprise: it seems that he can chew gum and walk at the same time!

Ok, that’s it. Too long, again. Take a look at the discussion yourself. It is pretty short and very interesting, not the least reason being how abstracting away from deep issues of abiding interest is often a pre-condition for opening up serious inquiry. Behavior may be what interests us, but given SF and AF it has proven to be refractory to serious study. Happily, studying the structure of the capacity independent of how it is used has proven to be quite a fertile area of inquiry. It would be a more productive world were these insights in L&M more widely internalized by the cog-neuro-ling communities.


[1] The one area where SFitude might be relevant concerns the semantics of lexical items. Chomsky has argued against denotational theories of meaning in part by noting that there is no good sense in which words denote things. He contrasts this with “words” in animal communication systems. As Chomsky has noted, how lexical items work poses “deep mysteries,” something that referential theories do not appreciate. See here for references and discussion.

Wednesday, July 13, 2016

Linguistic creativity 1

Once again, this post got away from me, so I am dividing it into two parts.

As I mentioned in a recent previous post, I have just finished re-reading Language & Mind (L&M) and have been struck, once again, by how relevant much of the discussion is to current concerns. One topic, however, that does not get much play today, but is quite well developed in L&M, is its discussion of Descartes’ very expansive conception of linguistic creativity and how it relates to the development of the generative program. The discussion is surprisingly complex and I would like to review its main themes here. This will reiterate some points made in earlier posts (here, here) but I hope it also deepens the discussion a bit.

Human linguistic creativity is front and center in L&M as it constitutes the central fact animating Chomsky’s proposal for Transformational Generative Grammar (TGG). The argument is that a TGG competence theory is a necessary part of any account of the obvious fact that humans regularly use language in novel ways. Here’s L&M (11-12):

…the normal use of language is innovative, in the sense that much of what we say in the course of normal use is entirely new, not a repetition of anything that we have heard before and not even similar in pattern - in any useful sense of the terms “similar” and “pattern” – to sentences or discourse that we have heard in the past. This is a truism, but an important one, often overlooked and not infrequently denied in the behaviorist period of linguistics…when it was almost universally claimed that a person’s knowledge of language is representable as a stored set of patterns, overlearned through constant repetition and detailed training, with innovation being at most a matter of “analogy.” The fact surely is, however, that the number of sentences in one’s native language that one will immediately understand with no feeling of difficulty or strangeness is astronomical; and that the number of patterns underlying our normal use of language and corresponding to meaningful and easily comprehensible sentences in our language is orders of magnitude greater than the number of seconds in a lifetime. It is in this sense that the normal use of language is innovative.

There are several points worth highlighting in the above quote. First, note that normal use is “not even similar in pattern” to what we have heard before.[1] In other words, linguistic competence is not an instance of pattern matching or recognition in any interesting sense of “pattern” or “matching.”  Native speaker use extends both to novel sentences and to novel sentence patterns effortlessly. Why is this important?

IMO, one of the pitfalls of much work critical of GG is the assimilation of linguistic competence to a species of pattern matching.[2] The idea is that a set of templates (i.e. in L&M terms: “a stored set of patterns”) combined with a large vocabulary can easily generate a large set of possible sentences, in the sense of templates saturated by lexical items that fit.[3] Note that such templates can be hierarchically organized and so display one of the properties of natural language Gs (i.e. hierarchical structures).[4] Moreover, if the patterns are extractable from a subset of the relevant data, then these patterns/templates can be used to project novel sentences. However, what the pattern-matching conception of projection misses is that the patterns we find in Gs are not finite, and the reason for this is that we can embed patterns within patterns within patterns within…you get the point. We can call the outputs of recursive rules “patterns,” but this is misleading, for once one sees that the patterns are endless, then Gs are not well conceived of as collections of patterns but as collections of rules that generate patterns. And once one sees this, then the linguistic problem is (i) to describe these rules and their interactions and (ii) to further explain how these rules are acquired (i.e. not how the patterns are acquired).
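To make the contrast concrete, here is a toy sketch (my own illustration; the vocabulary and mini-grammar are invented): a stored-template system can only saturate the finitely many patterns it has memorized, while a single recursive rule yields a new, deeper pattern at every level of embedding, so no finite template inventory can match its output.

```python
import random

NOUNS = ["John", "Mary", "the linguist"]
VERBS = ["left", "laughed"]

# Pattern-matching view: a finite, stored inventory of templates, saturated
# by vocabulary. Whatever depth was stored is all the depth there is.
TEMPLATES = [
    "{n1} {v1}",
    "{n1} thinks that {n2} {v2}",
]

def from_template(t):
    return t.format(n1=random.choice(NOUNS), n2=random.choice(NOUNS),
                    v1=random.choice(VERBS), v2=random.choice(VERBS))

# Rule-based view: ONE recursive rule covers every depth of embedding.
def sentence(depth: int) -> str:
    """S -> N V  |  N 'thinks that' S  (a rule, not a stored pattern)."""
    if depth == 0:
        return f"{random.choice(NOUNS)} {random.choice(VERBS)}"
    return f"{random.choice(NOUNS)} thinks that {sentence(depth - 1)}"

print(from_template(TEMPLATES[1]))  # bounded by the stored inventory
print(sentence(3))                  # a depth no stored template anticipates
```

On this way of carving things up, the acquisition problem targets the recursive rule itself, not any list of its outputs.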

The shift in perspective from patterns (and patternings in the data (see note 5)) to generative procedures and the (often very abstract) objects that they manipulate changes what the acquisition problem amounts to. One important implication of this shift of perspective is that scouring strings for patterns in the data (as many statistical learning systems like to do) is a waste of time because these systems are looking for the wrong things (at least in syntax).[5] They are looking for patterns whereas they should be looking for rules. As the output of the “learning” has to be systems of rules, not systems of patterns, and as rules are, at best, implicit in patterns, not explicitly manifest by them, theories that don’t focus on rules are going to be of little linguistic interest.[6]

Let me make this point another way: unboundedness implies novelty, but novelty can exist without unboundedness. The creativity issue relates to the accommodation of novel structures. This can occur even in small finite domains (e.g. loan words in phonology might be an example). Creativity implies projection/induction, which must specify a dimension of generalization along which inputs can be extended so as to apply to instances beyond the input. This, btw, is universally acknowledged by anyone working on learning. Unboundedness makes projection a no-brainer. However, it also has a second important implication. It requires that the generalizations being made involve recursive rules. The unboundedness we find in syntax cannot be accommodated via pattern matching. It requires a specification of rules that can be repeatedly applied to create novel patterns. Thus, it is important to keep the issue of unboundedness separate from that of projection. What makes the unboundedness of syntax so important is that it requires that we move beyond the pattern-template-categorization conception of cognition.

Dare I add (more accurately, can I resist adding) that pattern matching is the flavor of choice for the Empiricistically (E) inclined. Why? Well, as noted, everyone agrees that induction must allow generalization beyond the input data. Thus even Es endorse this, for Es recognize that cognition involves projection beyond the input (i.e. “learning”). The question is the nature of this induction. Es like to think that learning is a function from input to patterns abstracted from the input, the input patterns being perceptually available in their patternings, albeit sometimes noisily.[7] In other words, learning amounts to abstracting a finite set of patterns from the perceptual input and then creating new instances of those patterns by subbing novel atoms (e.g. lexical items) into the abstracted patterns. E research programs amount to finding ways to induce/abstract patterns/templates from the perceptual patternings in the data. The various statistical techniques Es explore are in service of finding these patterns in the (standardly, very noisy) input. Unboundedness implies that this kind of induction is, at best, incomplete. Or, more accurately, the observation that the number of patterns is unbounded implies that learning must involve more than pattern detection/abstraction. In domains where the number of patterns is effectively infinite, learning[8] is a function from inputs to rules that generate patterns, not to patterns themselves. See the link in note 6 for more discussion.

An aside: most connectionist learners (and deep learners) are pattern matchers and, in light of the above, are simply “learning” the wrong things. No matter how many “patterns” the intermediate layers converge on from the (mega) data they are exposed to, they will not settle on enough, given that the number of patterns that human native speakers are competent in is effectively unbounded. Unless the intermediate layers acquire rules that can be recursively applied, they have not acquired the right kinds of things, and thus all of this modeling is irrelevant no matter how much of the data any given model covers.[9]

Another aside: this point was made explicitly in the quote above, but to no avail. As L&M notes critically (11): “it was almost universally claimed that a person’s knowledge of language is representable as a stored set of patterns, overlearned through constant repetition and detailed training.” Add some statistical massaging and a few neural nets and things have not changed much. The name of the inductive game in the E world is to look for perceptually available patterns in the signal, abstract them, and use them to accommodate novelty. The unboundedness of linguistic patterns that L&M highlights implies that this learning strategy won’t suffice for the language case, and this is a very important observation.

Ok, back to L&M.

Second, the quote above notes that there is no useful sense of “analogy” that can get one from the specific patterns one might abstract from the perceptual data to the unbounded number of patterns with which native speakers display competence. In other words, “analogy” is not the secret sauce that gets one from input to rules. So, when you hear someone talk about analogical processes, reach for your favorite anti-BS device. If “analogy” is offered as part of any explanation of an inferential capacity, you can be absolutely sure that no account is actually being offered. Simply put, unless the dimensions of analogy are explicitly specified, the story being proffered is nothing but wind (in both the Ecclesiastes and the scatological sense of the term).

Third, the kind of infinity human linguistic creativity displays has a special character: it is a discrete infinity. L&M observes that human language (unlike animal communication systems) does not consist of a “fixed, finite number of linguistic dimensions, each of which is associated with a particular nonlinguistic dimension in such a way that selection of a point along the linguistic dimension determines and signals selection of a point along the associated nonlinguistic dimension” (69). Think, for example, of a higher pitch or chirp rate being associated with a greater intention to aggressively defend territory, or of the way that “readings of a speedometer can be said, with an obvious idealization, to be infinite in variety” (12).

L&M notes that these sorts of systems can be infinite, in the sense of containing “an indefinitely large range of potential signals.” However, in such cases the variation is “continuous,” while human linguistic expression exploits “discrete” structures that can be used to “express indefinitely many new thoughts, intentions, feelings, and so on.” ‘New thoughts’ in the previous quote clearly means new kinds of thoughts (e.g. the signals are not all about how fast the car is moving). As L&M makes clear, the difference between these two kinds of systems is “not one of ‘more’ or ‘less,’ but rather of an entirely different principle of organization,” one that does not work by “selecting a point along some linguistic dimension that signals a corresponding point along an associated nonlinguistic dimension” (69-70).
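A toy way to see the contrast (again my own illustration, not L&M's): a continuous system's signals differ only as points along one dimension, while a discrete system composes unboundedly many structurally distinct messages from a finite vocabulary.

```python
# Continuous system: the signal is a point on one nonlinguistic dimension,
# like a speedometer needle; infinitely many signals, one kind of message.
def speedometer_signal(speed_mph: float) -> float:
    return speed_mph

# Discrete system: a finite vocabulary plus a recursive rule yields an
# unbounded set of structurally different messages, not points on a scale.
def nested_possessive(n: int) -> str:
    """Each n picks out a different kind of referent: John, John's mother,
    John's mother's mother, ... not 'more' or 'less' of one signal."""
    return "John" if n == 0 else nested_possessive(n - 1) + "'s mother"

print(speedometer_signal(61.3))  # 61.3 vs. 61.4: more/less of the same thing
print(nested_possessive(3))      # a different principle of organization
```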

In sum, human linguistic creativity implicates something like a TGG that pairs discrete hierarchical structures relevant to meanings with discrete hierarchical structures relevant to sounds, and does so recursively. Anything that doesn’t do at least this is going to be linguistically irrelevant, as it ignores the observable truism that humans are, as a matter of course, capable of using an unbounded number of linguistic expressions effortlessly.[10] Theories that fail to address this obvious fact are not wrong. They are irrelevant.

Is hierarchical recursion all that there is to linguistic creativity? No!! Chomsky makes a point of this in the preface to the enlarged edition of L&M. Linguistic creativity is NOT identical to the “recursive property in generative grammars,” as interesting as such Gs evidently are (L&M: viii). To repeat, recursion is a necessary feature of any account of linguistic creativity, BUT the Cartesian conception of linguistic creativity consists of far more than what even the most explanatorily adequate theory of grammar specifies. What more?



[1] For an excellent discussion of this, see Jackendoff’s very nice (though unfortunately (mis)named) Patterns in the Mind (here). It is a first-rate debunking of the idea that linguistic minds are pattern matchers.
[2] This is not unique to linguistic cognition. Lots of work in cog sci seems to identify higher cognition with categorization and pattern matching. One of the most important contributions of modern linguistics to cog sci has been to demonstrate that there is much more to cognition than this. In fact, the hard problems have less to do with pattern recognition than with pattern generation via rules of various sorts. See notes 5 and 6 for more offhanded remarks of deep interest.
[3] I suspect that some partisans of Construction Grammar fall victim to the same misapprehension.
[4] Many cog-neuro types confuse hierarchy with recursion. A recent prominent example is Frankland and Greene’s work on theta roles. See here for some discussion. Suffice it to say that one can have hierarchy without recursion, and recursion without hierarchy, in the derived objects that are generated. What makes linguistic objects distinctive is that they are the products of recursive processes that deliver hierarchically structured objects.
[5] Note that unboundedness implies novelty, but novelty can exist without unboundedness. The creativity issue relates to the easy handling of novel structures. This can occur even in small finite domains. Creativity implies projection, which must specify a dimension of generalization along which inputs can be extended to apply to instances beyond the input. Unboundedness makes projection a no-brainer. It further implies that the generalization involves recursive rules. Unboundedness cannot be accommodated by pattern matching. It requires a specification of rules that can be repeatedly applied to create novel patterns. Thus, it is important to keep the issue of unboundedness separate from that of projection. What makes the unboundedness of syntax so important is that it requires that we move beyond the pattern-template-categorization conception of cognition.
[6] It is arguable that some rules are more manifest in the data than others are and so are more accessible to inductive procedures. Chomsky makes this distinction in L&M, contrasting surface structures, which contain “formal properties that are explicit in the signal,” with deep structure and transformations, for which there is very little to no such information in the signal (L&M:19). For another discussion of this distinction see (here).
[7] Thus the hope of unearthing phrases via differential intra-phrase versus inter-phrase transition probabilities; see the sketch after these notes for a toy illustration.
[8] We really should distinguish between ‘learning’ and ‘acquisition.’ We should reserve the first term for the pattern recognition variety and adopt the second for the induction to rules variety. Problems of the second type call for different tools/approaches than those in the first and calling both ‘learning’ merely obscures this fact and confuses matters.
[9] Although this is a sermon for another time, it is important to understand what a good model does: it characterizes the underlying mechanism. Good models model mechanism, not data. Data provides evidence for mechanism, and unless it does so, it is of little scientific interest. Thus, if a model identifies the wrong mechanism, then no matter how apparently successful it is in covering the data, it is the wrong model. Period. That’s one of the reasons connectionist models are of little interest, at least when it comes to syntactic matters.
            I should add that analogous creativity concerns drive Gallistel’s arguments against connectionist brain models. He notes that many animals display an effectively infinite variety of behaviors in specific domains (caching behavior in birds, or dead reckoning in ants) and that these cannot be handled by connectionist devices that simply track the patterns attested. If Gallistel is right (and you know that I think he is), then the failure to appreciate the logic of infinity makes many current models of mind and brain beside the point.
[10] Note that unboundedness implies novelty, but novelty can exist without unboundedness. The creativity issue relates to the easy handling of novel structures. This can occur even in small sets. Creativity implies projection, which must specify a dimension of generalization along which inputs can be extended to apply to instances beyond the input. Unboundedness makes projection a no-brainer. It further implies that the generalization is due to recursive rules, which require more than establishing a fixed number of patterns that can be repeatedly filled to create novel instances of those patterns.
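Finally, to make note 7 concrete, here is a minimal sketch (my own toy illustration; the corpus and threshold are invented) of the statistical-learning idea of finding phrases by their transition probabilities: cut the string where the probability of the next word dips, since transitions across phrase edges tend to be rarer than transitions within phrases.

```python
from collections import Counter

# A made-up toy corpus, tokenized, with '.' as an utterance boundary marker.
corpus = ("the dog chased the cat . the cat saw the dog . "
          "a dog barked . the cat slept .").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def transition_prob(w1, w2):
    """P(w2 | w1): relative frequency of the transition w1 -> w2."""
    return bigrams[(w1, w2)] / unigrams[w1]

THRESHOLD = 0.3  # invented cutoff for the sketch
sentence = "the dog saw the cat".split()
for w1, w2 in zip(sentence, sentence[1:]):
    p = transition_prob(w1, w2)
    flag = "<- posit a boundary here" if p < THRESHOLD else ""
    print(f"{w1} -> {w2}: {p:.2f} {flag}")
```

As the posts above argue, even where this works it delivers phrase-like chunks of strings, not the recursive rules that generate them; it is learning in the note-8 sense, not acquisition.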