
Wednesday, August 27, 2014

Nativism, Rationalism and Empiricism-1

There are two different kinds of arguments for a Rationalist approach to the study of mind. The first, so far as I can tell, is virtually tautological. The second is quite substantive. What are they? I’ll try to lay them out in a couple of posts. This one discusses the “tautology.”

The tautological version is well laid out in the Royaumont conference papers (here) and how they relate to the innateness “controversy.” I put the latter in quote marks because most of the participants (including and especially Fodor and Chomsky, the so-called hard-core nativists) considered the idea that the mind has innate structure nothing but a simple tautology. Indeed, this is how Fodor and Chomsky repeatedly refer to the “innateness hypothesis” (see e.g. 263, 268). It’s tautological that the mind (and brain) has structure and biases, as without such there can be no induction whatsoever, and it is taken for granted that biological systems are constantly inducing (viz. engaging in non-demonstrative inference). This said, it is interesting to re-read the discussions, for despite this general agreement there is lots of intellectual toing and froing. Why? Because, as Chomsky puts it (see Fodor’s version p. 268):

What is important is not just to see that something is a “tautology,” but also to see its import. (262)

What’s the import, as Fodor and Chomsky understood things? That there is no “learning” without a set of given projectable predicates that undergird it. Or, more accurately, as Fodor puts it, “the very idea of concept learning is confused” (143). And the confusion? Two related, but importantly different, concepts have been run together: concept acquisition (CA) and belief fixation (BF). Regarding the former, we have no theory of how concepts are acquired. What we have are theories of BF, which are, at the most general level, inductive logics of various kinds, which, by their nature, presuppose a given set of projectable predicates and so cannot themselves serve as theories of CA. Or as Fodor puts it:

…no theory of learning that anybody has ever developed is, as far as I can see, a theory that tells you how concepts are acquired; rather such theories tell you how beliefs are fixed by experiences – they are essentially inductive logics. That kind of mechanism, which shows how beliefs are fixed by experiences, makes sense only against the background of radical nativism. (144).

Fodor and Chomsky (and most of the other participants at the Royaumont conference I might add if the comments section is any indication) believe that the above is a virtual tautology. All theories of learning are selective (i.e. stories where the given hypothesis space proposes and the incoming experience disposes).[1] Where tautology ends and (some of the) hard work begins is to specify the set of projectable predicates that are in fact biologically/cognitively given (i.e. the shape and content of the hypothesis space that BFers actually bring to the “learning” problem).  To repeat, no given space of alternatives, no way for an inductive logic or theory of BF to operate. The account of what is given is (or is a very good part of) a theory of the relevant biases.[2]
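To make the selectional picture concrete, here is a minimal sketch of BF as updating over a fixed, antecedently given hypothesis space. The sketch is mine, not Fodor's or Chomsky's, and the coin-flip hypotheses, prior, and likelihoods are all invented for illustration. The thing to notice is that experience only redistributes credence over hypotheses that are there from the start; no run of data, however long, adds a new one.

```python
# Toy "selective" learning: belief fixation over a GIVEN hypothesis space.
# Data shift credence among the hypotheses; they never create new ones.

# Each hypothesis maps an observation to the probability it assigns it.
hypotheses = {
    "fair coin":   lambda flip: 0.5,
    "biased to H": lambda flip: 0.8 if flip == "H" else 0.2,
    "biased to T": lambda flip: 0.2 if flip == "H" else 0.8,
}

# Prior credences over the given space (the "innate endowment," in effect).
beliefs = {h: 1.0 / len(hypotheses) for h in hypotheses}

def fix_beliefs(data):
    """Bayesian updating: experience selects among given alternatives."""
    for datum in data:
        for h, likelihood in hypotheses.items():
            beliefs[h] *= likelihood(datum)
        total = sum(beliefs.values())
        for h in beliefs:
            beliefs[h] /= total  # renormalize credences

fix_beliefs(["H", "H", "T", "H", "H"])
for h, p in sorted(beliefs.items(), key=lambda kv: -kv[1]):
    print(f"{p:.3f}  {h}")
```

Whatever bells and whistles one adds (evidential thresholds, credibility scores, privileged hypotheses; see note 2), the architecture remains selectional: the space proposes, the data dispose.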

Fodor and Chomsky pull several important consequences out of this tautology.

First, that many positions confidently explored in the cognitive literature are, strictly speaking, incoherent as expressed. Fodor discusses the “Piagetian” view that developmental conceptual change is a learning process in which learning replaces earlier conceptually weaker stages with subsequent conceptually stronger ones. Fodor argues that this position is, very simply, conceptually untenable. It is not untenable that development involves a succession of stages where the i-th stage is conceptually stronger than the (i-1)-th stage. Rather, it is untenable that this development is a result of stronger concepts arising via induction (i.e. learning). Why? Because for induction to be possible the conceptually stronger system has to be representable. But to be representable means that the concepts that represent it must be cognitively available (must already be in the hypothesis space). But if so, they cannot enter the hypothesis space by induction, as they are already available for induction. So, development cannot be a matter of CA via learning.

Does this mean that development cannot be a matter of stronger concepts being acquired over time? No. But it does mean that this process cannot be inductive (e.g. this scenario is compatible with “maturing” new concepts, just not learning new ones). Note too that this is compatible with treating development as a matter of new belief fixation. But recall that BF implies that the relevant concepts are given and available for computational use. Or, as Fodor puts it:

…a theory of conceptual plasticity of organisms must be a theory of how the environment selects among the innately specified concepts. It is not a theory of how you acquire concepts, but a theory of how the environment determines which parts of the conceptual mechanism in principle available to you are in fact exploited. (151)

In other words:

…fixation of belief along the lines of inductive logic…is one that assumes the most radical possible nativism: namely that any engagement of the organism with its environment has to be mediated by the availability of any concept that can eventually turn up in the organism’s belief. The organism is a closed system proposing hypotheses to the world, and the world then chooses among them in terms explicated by some inductive logic. (152)

To repeat, Fodor and Chomsky and virtually all the participants at the Royaumont conference take this to be tautological (as do I). The only theories of learning we have are theories of BF and these theories all presuppose that the stock of possible acquirable concepts is given.  Radical nativism indeed![3]

So far as I can tell, the logic that Fodor and Chomsky outlined well over 30 years ago has not changed. And, if this is correct, then the central problem in cognition, linguistics being a special case, is to adumbrate the relevant hypothesis space for any given domain. And the only way to do this is to investigate the acquirable in terms of the acquired and argue backwards. If BF is the name of the game, then what is presupposed had better suffice to deliver the concepts acquired, and once one looks carefully at what’s on offer, this simple requirement appears to rule out most of the most popular theories, or so Fodor and Chomsky (and I) would argue.

It is worth observing that this tautology was recognized by the great empiricist philosophers. In this sense, the blank tablet metaphor generally associated with their theories of mind is unhelpful at best and misleading at worst. The distinguishing mark of empiricism is not that the mind is unstructured (comes with no given hypothesis space) but that the dimensions of the hypothesis space are entirely sensory. On this view, admissible concepts are either sensory primitive concepts or Boolean combinations of such. This is a substantive theory, and, as Fodor notes, it has proven to be false.[4] Or as Fodor, in his characteristically elegant way, puts it:

I consider that the notion that most human concepts are decomposable into a small set of simple concepts –let’s say, a truth function of sensory primitives – has been exploded by two centuries of philosophical and psychological investigation. In my opinion, the failure of the empiricist program of reduction is probably the most important result of those two hundred years in the area of cognition. (268)

As many of you know, Fodor has argued that not only is there no possible reduction to a small number of sensory primitives, but that there is very little possible reduction at all, at least when it comes to our basic lexical concepts. I personally find his arguments against reductions rather strong.[5] However, whether or not one buys the conclusion, the form of the argument seems to me correct: if you want a restricted set of primitives then you are obliged to show how these can be used to build up what we actually observe. The empiricist restriction to a small base of sensory primitives failed to deliver the goods; therefore, it cannot be correct; it cannot be the case that our basic concepts are restricted to sensory primitives.
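To fix ideas, here is a toy rendering of an Empiricist hypothesis space: sensory primitives plus a layer of Boolean combination. Everything in it (the primitives, the feature names, the stimulus) is invented for illustration; the point is only the shape of the space.

```python
# Toy Empiricist hypothesis space: admissible concepts are sensory
# primitives or Boolean combinations thereof. All specifics are invented.

from itertools import combinations

# Sensory primitives: predicates over a stimulus (a dict of features).
primitives = {
    "RED":   lambda s: s["color"] == "red",
    "ROUND": lambda s: s["shape"] == "round",
    "WARM":  lambda s: s["temp_c"] > 20,
}

def boolean_layer(prims):
    """Add one layer of NOT/AND/OR combinations to the given primitives.
    The resulting space is fixed in advance: nothing outside it is learnable."""
    space = dict(prims)
    for name, p in prims.items():
        space[f"NOT {name}"] = (lambda p: lambda s: not p(s))(p)
    for (n1, p1), (n2, p2) in combinations(prims.items(), 2):
        space[f"{n1} AND {n2}"] = (lambda p, q: lambda s: p(s) and q(s))(p1, p2)
        space[f"{n1} OR {n2}"]  = (lambda p, q: lambda s: p(s) or q(s))(p1, p2)
    return space

space = boolean_layer(primitives)
stimulus = {"color": "red", "shape": "round", "temp_c": 25}
print(sorted(name for name, pred in space.items() if pred(stimulus)))
```

Fodor's complaint is that lexical concepts like DOORKNOB stubbornly refuse to come out as any such truth function of sensory primitives, no matter how many Boolean layers one stacks on.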

So, is nativism ineluctable? Yup. So what’s the fight between Rationalists and Empiricists about? It’s about two things: (i) the shape of the hypothesis space: what are the primitive projectable predicates and how do they combine to deliver more complex predicates (e.g. what are the basic operations, primitives and principles of UG), and (ii) how, given this space, are beliefs fixed (e.g. what is the relation between PLD and G)? Everyone is a nativist when it comes to CA. This is not controversial (or shouldn’t be). Empiricists are nativists that believe in a pretty spare hypothesis space. Rationalists are happy to entertain far more complex ones. This difference has an impact on how one understands BF. I turn to this in the next post.



[1] The distinction between instructive and selective theories has a long history in the study of the immune system. Here is a useful summary. Fodor’s point, which seems to me to be entirely correct (and obvious), is that all current theories of learning are selectionist and hence presuppose a fixed innate background of relevant alternatives.
[2] There may be room, in addition, for accounts of how to use incoming data to update the information that guides a learner’s movements across the given hypothesis space. What kinds of evidential thresholds are there, how many competing hypotheses does one juggle at once, what are the functions that in/decrease a hypothesis’ credibility “score,” how many “kinds” of evidence are tabulated at once, does the credibility function treat all hypotheses the same or are some more privileged than others, etc.? These are all relevant concerns. But Fodor and Chomsky’s tautological point is that they are all secondary to the issue of what the hypothesis space looks like.
[3] This conclusion is still resisted by many. See for example, Gallistel’s review of Sue Carey’s book here.
Others also seem to misunderstand the import of this. For example, Amy Perfors (here) claims that Fodor’s point is “true but trivial” (132). This is taken to be a critique, but it is exactly Fodor’s point. As Perfors emphasizes: “…any conception which relies on not having a built-in hypothesis space is incoherent…” (128). This is a vigorous rewording of Fodor and Chomsky’s point. It is curious how often one finds strongly worded criticisms of nativist positions followed immediately by those very positions offered as novel insights by the same author, in the same paper.
[4] Once again some have confused the issues at hand. Perfors (see above) is a good example. The paper contrasts Nativism and Empiricism (127). But if, for learning to be possible, a hypothesis space must be given, then everyone must be a nativist in Fodor and Chomsky’s sense. The debate is not over whether we are nativists, but what kind of nativists we are (i.e. how rich a hypothesis space we are willing to tolerate). The contrast is between Rationalists and Empiricists, the latter limiting the admissible predicates and operations to associationist ones. And, as Fodor notes (see immediately below), this is what’s wrong with Empiricism. It’s not the nativism, but the associationism, that makes empiricism a failed program.
[5] I hate to pick on the Perfors paper (well, not really), but it demonstrates how cavalier critics can be when it comes to positions that they consider clearly incorrect. The paper argues that Fodor’s critique can be finessed by simply understanding that one can have hierarchies of hypothesis spaces (132-3). Thus, contra Fodor, it is possible to treat elements of level N as decomposable into concepts of level N+1, and this gets all that Fodor criticizes but without the unwanted implicational consequences. Maybe. But oddly the paper never actually illustrates how this might be done. You know, take a concept or two, decompose them, and show how they operate to license the wanted inferences and block the unwanted ones. There are lots of concepts around to choose from. Fodor has discussed a bunch. But the Perfors paper does nothing even approaching this. It simply asserts that conceptual hierarchies get one around Fodor’s arguments. This is cheap stuff, really cheap. But sadly, all too common.

Sunday, August 24, 2014

Cakes, Damn Cakes, and Other Baked Goods

As promised in my previous post, here's Omer's reply to my musings on derivations and representations.



In a recent post on this blog, Thomas Graf addresses the derivationalism vs. representationalism debate---sparked (Thomas' post, that is) by my remarks in the comments section of this post.

Thomas notes, among other things, that literally any derivational formalism can be recast representationally. For example, one could take the set of licit derivational operations in the former model, and turn them into representational well-formedness conditions on adjacent pairs of syntactic trees in an ordered sequence. (Each tree in this ordered sequence corresponds, informally, to an intermediate derivational step in the former model.) As best I can tell, this translatability of any derivational formalism into representational terms is not even debatable; I certainly wouldn't argue this point.
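To see what such a recasting looks like, here is a minimal sketch (mine, not a real formalism's; the trees and the toy Merge step are stand-ins). The derivational model's step relation is restated as a well-formedness condition on adjacent pairs in a sequence of trees:

```python
# Toy recasting of a derivational formalism as representational constraints.
# A "tree" is a nested pair; (toy) External Merge of A and B yields (A, B).

def licit_step(before, after):
    """Well-formedness condition on an ADJACENT PAIR of trees: 'after'
    must arise from 'before' by one application of toy External Merge,
    i.e. after == (before, x) or (x, before) for some item x."""
    return isinstance(after, tuple) and len(after) == 2 and before in after

def wellformed_sequence(trees):
    """A sequence is well-formed iff every adjacent pair is licit:
    the whole derivational history restated as a static filter."""
    return all(licit_step(a, b) for a, b in zip(trees, trees[1:]))

# A licit "derivation": cat -> [the cat] -> [saw [the cat]]
print(wellformed_sequence(["cat", ("the", "cat"), ("saw", ("the", "cat"))]))  # True
print(wellformed_sequence(["cat", ("saw", "dog")]))                           # False
```

Nothing hangs on the details: any derivational operation can be restated as the corresponding pair-condition, which is why the translatability claim isn't really debatable.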

The Cake is a Lie

In the comments section to Norbert's final remarks on Chomsky's lecture series, Omer Preminger laments the resurrection of representational devices in Minimalism, which started with Chomsky (2004) "Beyond Explanatory Adequacy". I originally took Omer to argue against representational approaches on two levels:
  1. They are fundamentally flawed regarding both competence (bad empirical coverage) and performance (no plausible parsing model).
  2. Phase-theory is ill-motivated and doesn't get the job done.
After a short email conversation with Omer it actually became clear that this is not an accurate reflection of his views, which he summarizes in this follow-up post. Still, now that we've got those two claims laid out in front of us, let's assess their soundness.

I've got no qualms with the second claim. The motivation and empirical use of phases has frequently been criticized in the literature, and phases don't fare any better from a computational perspective. The memory reduction argument for phases is hogwash, phases have no discernible effect on generative capacity (neither weak nor strong), and they do not simplify the learning problem. Norbert captures the state of affairs ever so succinctly: "Phases don't walk the walk [...] the rhetoric is often ahead of the results."

The first claim, on the other hand, I can't get behind at all. For one thing, it's a typical case of the cake fallacy: Every cake I've made in my life has been horrible, thus all cakes are horrible (and I'm more of a muffins guy anyways). Even if linguists haven't come up with any good representational models so far --- which I think many syntacticians would emphatically disagree with --- that doesn't mean the approach as such is intrinsically inferior to derivational ones. Now I can already see the flurry of posts about how theory construction is also a probabilistic process where previous failures of specific types of analysis make them less preferable, that the existence of a working representational account is moot if we can't find it, yada yada yada. Step back from your keyboards everyone, I'm actually not interested in arguing this point. My real worry is much more basic. The first claim implies, as is commonly done in linguistics, that there is a meaningful difference between representational and derivational accounts. Well, it turns out that this specific cake is a lie.

Friday, August 22, 2014

Boxes, arrows and insights

I have a habit of reading economics blogs. I'm not sure why, but I think it's because it's a field that grew up at about the same time that GG did, it has a technical side, and I'm friends with some economists. At any rate, here is a post by someone I have enjoyed for a long time. He's not an economist, but a finance guy from the City who went under the handle D-squared. Aside from making me laugh, the post also captured how I feel when I go to lots of psych talks (including many psycho-ling talks). Boxes and arrows, boxes and arrows: each connected to each several times over.

Thursday, August 21, 2014

Great news

Colin Phillips, my very talented and provocative colleague, has decided to indulge in occasional blogging. I managed to convince him to cross post at FoL. Here is his first post.

Saturday, August 16, 2014

Linguistics?

I read this piece in the NYT last week about sexual harassment of women in various of the sciences. I've always liked to think that linguistics was a bit of an exception in this regard, but I may have an obvious gender blind spot. So I thought I'd post and ask: how is linguistics as a field doing with regard to the treatment of women? And if we are not doing particularly well, or not well enough, what can we do about it?

Monday, August 4, 2014

Final comments on lecture 4

This ends the comments (here and here) on lecture 4.

The logic used to account for the EPP also covers Fixed Subject Condition (FSC) effects.[1] Consider (2’) again:

(2’) *Who1 did John say that t1 saw Mary

The T needs to be labeled. If who moves, there will be nothing in Spec-T to agree with and so labeling will fail. That’s Chomsky’s story. The obvious problem with this account is the absence of FSC effects when there is no overt complementizer (I pointed to this in the comments to lecture 3). Chomsky addresses this problem here. He proposes that a deleted C is no longer a phase. More exactly, to delete a C you must transfer the feature that says that C is a phase to T. In effect, C deletion makes T the phase head. So not only do we lower phi and tense features from C to T, but phase-headedness as well. This now ties FSC effects to the presence or absence of an overt C.[2]

Observation 1: This story requires that that is deleted rather than not present at all. Were it never present, C could not transfer its features to T, and T has no features of its own (more below). Thus, to make this work, we need deletion operations in the syntax. A question that arises is how similar the operation deleting that is to more run-of-the-mill ellipsis operations. The latter are generally treated as simply dephoneticization processes. This will not suffice here. It must be that getting rid of phonetic content requires that all features of C lower to T. For those with long enough memories, this smells a little of the old notion of “L-contains.” At any rate, it’s worth observing that C deletion is not simply quieting the phonetics.

Observation 2: there is well-known dialectal variation regarding that-t effects in English. This suggests that deletion might be sufficient for transferring all of C’s features to T but it is not necessary. So, contrary to what Chomsky suggests, the explanation requires that we say something “special” about these FSC effects in English. IMO, things are a little worse than this. As I mentioned in earlier comments, many speakers seem to allow violations of the FSC in English even with if/whether in the C position (or at least so report a third of my syntax undergrads). There is no problem accommodating this by allowing feature lowering as Chomsky suggests. But this is now decoupled from phonetic articulation. Unfortunately, in the relevant dialects, deleting a that does not license null subjects, which one might have expected (see (1)). Or more exactly, why doesn’t lowering all the features of C to T serve to strengthen T? Note that Cs can label just fine without anything in Spec-C helping them along. Given this, why shouldn’t lowering all the features of C onto T (including the “phase head feature,” see below) make T as independent as C? Dunno, but it doesn’t.
           
(1)  *John thinks (that) is a man here

In other words, the EPP and FSC do not really swing together, though they should if they were truly unified, one might suppose. 

Let’s put these questions to one side and continue. Chomsky then asks how we can get (2):

(2)  Who do you think t is kissing Sue

How do we label the lower “TP” if who moves? Chomsky says that it is labeled before C deletes and labels cannot be deleted. In other words, labels are indelible (think Lasnik and Saito). Now, Chomsky really doesn’t like this way of putting things. What he wants to say is that CS has phase-sized memory (i.e. CS can recall what operations have taken place within a phase). In other words, there is phase-level memory for all syntactic operations. By lowering phase-hood from C to T, the next higher v* phase can “remember” that the lower T was given a label via agree, and so movement of the DP in Spec-T is ok. So, it is not that the labeling is indelible, but that all operations that happen within a phase are recollected in that phase. Once labeled, memory tells us that it is always labeled.

This emphasizes the computational aspects of phase theory.  What’s important about phases is that they reduce memory demands of a computation. The reverse of this is that it allows some memory of previous operations to be retained.  This is quite definitely not a conceptual argument. There is no conceptual motivation for these assumptions. The motivations are computational. The question becomes how we bring information forward in a derivation, how forgetting can be computationally efficient etc.  Phases and the properties Chomsky relies on here are entirely of this variety.

Comment: Two things: this is very Barriers-ish in spirit. Phases are no longer fixed, but change in the course of the derivation (as Den Dikken was the first to propose). And call it what you want, indelibility is back. Moreover, just as in Barriers, T has a strange role in this system. It is not an inherent phase but can become one by inheritance. Sound familiar?[3] To me, this all has the feeling of a Rube Goldberg device, but this is partly a matter of taste. Some might think Barriers elegant. Go figure.

Let me make my unease clearer. It seems that Chomsky is not that happy with T and its various special properties within his system (again, just like T in Barriers). He, in fact, proposes that T has no properties of its own. It’s just there to receive properties from C. This makes T very similar to Agr in older MP, and recall that Chomsky argued that grammar-internal formatives like Agr are to be eschewed. They cause DP headaches, as do any grammar-internal formatives (viz. we need to explain how they and their properties got into FL). Worse, IMO, the special properties of T are critical in Chomsky’s explanations of the EPP and FSC effects. But this strikes me as a non-trivial problem for his account. Why doesn’t explaining these special properties of FL in terms of special properties of T amount to re-description rather than explanation? One of the salutary effects of MP has been to warn us about confusing the two. However, the more T is special, the more accounts of Spec-T effects (EPP and FSCs) are weakened. And from what I can tell, T’s special properties are critical in deriving the results Chomsky obtains.

There are other Barriers-like resonances here. Recall that in the Barriers framework, VP was never a barrier. Why? Because we could always adjoin to it and thereby void its barrierhood. In the present story, there is also a big asymmetry between C and v*. The latter never displays FSC or ECP effects. Why not? Because v* always loses its phase property. How? Because Chomsky assumes that when the root raises to v*, v* gets buried inside the raised root and thereby loses its phase properties. Though Chomsky does not discuss this, it raises the question of the role of v* in a phase-based theory if it always loses its phase property. Do v*s no longer induce PIC effects? It would be very odd to assume that the lower copy of the root inherited the phase property, like T inherits it from C. After all, this is the tail of a head chain, and tails are generally grammatically inert. So, one conclusion could be that v* never induces PIC effects on this revised account. Of course, once again, we can add technical fixes to obviate this conclusion, but, at least for me, their motivations will be empirical, not conceptual. This is not bad, but it does do some damage to the SMT line of reasoning Chomsky cherishes.

So, v* gets special treatment because of the properties of roots, and T gets special treatment because it’s just, well, odd. The story may hang together, but it is hardly conceptually pretty, at least from where I sit.

Question: When R(V) raises to v*, what happens to the phase property of v*? I assume that it is eliminated and this is why there are no EPP/FSC effects. Ok, does R(V) inherit the phase-hood property? This seems unlikely, as it is the tail of the head chain, but maybe. If not, is the phase-hood of v* simply voided, so that what we are left with is one big C-phase? If so, how does this enhance computational efficiency?[4]

This post is already way too long. Let me end with three more highlights and maybe a remark or two.

Lecture 4 drops the idea that all grammatical action takes place at the phase head. In particular, he allows I-merge to apply completely independently of any relationship to C. He does this to get rid of the counter-cyclic movement he needed in PoP. Recall that counter-cyclic movement violates the NTC, which is a part of the conceptually best version of Merge. Chomsky really didn’t want to allow it, and he eliminates it here at the cost of abandoning the assumption that all grammatical operations occur at the phase head.

A consequence of this is that Chomsky has to abandon his explanation from PoP of why Gs raise T to C but don’t raise DPs to C instead, given that they are equally close if SLOs are without labels. You may recall that he accounted for this by moving the subject to Spec-T counter-cyclically, in derivations that applied “all at once” at the phase level.

I’m glad that Chomsky drops this now for two reasons. First, I never liked his earlier explanation (indeed I was part of a trio arguing that it didn’t work and was the wrong way to proceed (here)). Second, I never understood what “all at once” derivations meant. Ever.  I pretended to and taught it, but never got it.  Now I don’t have to, it seems. Curiously, Chomsky not only drops this analysis but states that it was always “artificial.” Yup.

Chomsky also finally abandons the last residues of Greed-based MP theories. Merge is free. If you follow him here, you can stop worrying about what motivates this or that movement. Note, this is conceptually the right move (and I have thought this for a long time and even said so publicly). Chomsky notes that E-merge is not subject to greed considerations and so I-merge shouldn’t be either, given that they are two instances of the very same operation. Again, yup. So much for probing and agreeing being a pre-condition for I-merge.

Does this mean that movement is never “for a reason”? Well, it does mean that it is never for a local CS reason. Rather, we return once again to a “generate and filter” theory of computation, similar to what we had in GB. The main difference is that this time the filters are provided by Bare Output Conditions, in particular the requirement that all SLOs be labeled prior to Transfer, with the MLA sometimes requiring what is effectively a Spec-X configuration to provide a viable label. So, Probe/goal seems out and Spec-X is back. Plus ca change (and I know I’m missing a thingy under ‘c’).

This is enough. At least I’ve had enough. Let me once again end on a positive note. As you might have noticed, I am not (yet) convinced by Chomsky’s story here. I believe that he has mis-analyzed the role of labels in G.  His approach rests on the assumption that labels play no role in CS. They are only relevant to interface interpretation.  I currently believe that this is likely wrong. At the very least, I don’t really see how labels are required for CI (or SM) interpretive rules to apply. For CI, at least, all we need, so far as I can make out, is the branching structure and the contents of the individual atoms.  So the premise that motivates the MLA as a BOC seems (at least to me) very shaky.

However, though I am not that moved by Chomsky’s proposal, I am moved by his method and overall conception of the enterprise. I wholeheartedly agree that we should take Galileo’s Maxim as a strong boundary condition on theory. I completely agree that we should respect GG’s history and treat its generalizations as the targets for more principled explanation.[5] I also agree that the critics of GB that he mentions have contributed virtually nothing to our understanding of FL. I even believe that Chomsky’s efforts are an excellent illustration of what we should be aiming to do. I just am not moved by the details of his effort. But, really, that’s a small disagreement, among friends.






[1] Chomsky calls these ECP effects. I changed this to FSC effects to distinguish the subject/object asymmetry effects in the ECP from the argument/adjunct ones. Historically, the latter have proven more recalcitrant and were the ones that called forth the heavy ECP assumptions. Indeed, for some analyses, a good part of the subject/object asymmetries were relegated to head government effects (Rizzi, Aoun et al, Saito) more on the SM side of things than the CI.
[2] Transfer of the “phase feature” (PF) from C to T seems like a clear violation of the NTC. As stated, a T that has no inherent PF receives one and thereby changes its grammatical powers. One can play around with definitions so that this violation of the NTC doesn’t count, but the definitions will not enjoy the conceptual naturalness that the conceptual versions of the SMT rely on.
[3] It seems that Chomsky is not that happy with T and its various properties. He, in fact, suggests that T has no properties of its own. It’s just there to receive properties from C. This makes T very similar to Agr in older MP, and recall that Chomsky argued that such grammar-internal formatives are to be eschewed. They cause DP headaches, as do any grammar-internal formatives. The special properties of T are critical in Chomsky’s explanations of the EPP and FSC effects. But this strikes me as a problem for his account. Why doesn’t explaining these special properties of FL in terms of special properties of T amount to re-description rather than explanation? One of the salutary effects of MP has been to warn us about confusing the two. However, the more T is special, the more accounts of Spec-T effects (EPP and FSCs) are weakened.
[4] And talking about memory load: what about the phase-hood of D? Chomsky really does not like to talk about D as a phase. He mentions from time to time that it might be one, but only reluctantly. But if it is not a phase, then the memory demands on phases can be arbitrarily big given the recursive proclivities of DPs. So, it would seem that on computational grounds, if v* is a phase then D should be one as well. Indeed, given that “clauses” are already bounded at C, why do we need to also computationally bound them at v*? Remember, what constitutes a phase needs to be coded into FL and this causes problems for DP. So, all in all, we want the fewest phases we can get away with.
[5] I would go further: I think that these lectures constitute a bit of a departure in Chomsky’s minimalist style. The targets of explanation are now very broad generalizations that GG has established, the aim being to explain their properties. Earlier efforts were far more local, the targets of explanation being things like Existential Constructions in Icelandic. I think that this more general project gets the explanatory grain more nearly correct.