This is a reply to Alex's reply here. It did not fit into the limited space the comment section makes available. Sorry.
Sigh. Why the bait and switch there, Alex? Where's this talk about categories coming from? But let me get there. It seems to me that you really don't get the argument, so let me illustrate it with an example you will be familiar with, as I gave it to you before.
The question on the table is whether I am entitled to a domain-specific UG built with largely domain-general "circuits." Now, a priori this seems reasonable. I can build a "will read Windows only" machine using the same chips that will build a "will read OS X only" system. The very same chips can be used to exclusively read/use two different programming formats. It's not only doable, it has been done. So the conceptual possibility exists.
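The chip analogy can be made concrete with a toy sketch (mine, not from the post, and in Python rather than silicon): two mutually exclusive "readers" are assembled entirely from the same two domain-general combinators, so all the format-specificity lives in the wiring, not in the parts.

```python
# Toy illustration (hypothetical, not from the original post): the same
# domain-general "chips" -- here, two generic combinators -- wired into
# two machines that each accept only one format and reject the other.

def char(c):
    """Generic primitive: consume exactly one expected character."""
    def circuit(s):
        return s[1:] if s and s[0] == c else None
    return circuit

def seq(*circuits):
    """Generic primitive: run circuits in order; all must succeed."""
    def circuit(s):
        for f in circuits:
            s = f(s)
            if s is None:
                return None
        return s
    return circuit

# Two exclusive readers built from identical primitives:
reads_windows_only = seq(char('W'), char('I'), char('N'))
reads_osx_only = seq(char('O'), char('S'), char('X'))

def accepts(machine, s):
    """A machine accepts a string iff it consumes it completely."""
    return machine(s) == ''

print(accepts(reads_windows_only, 'WIN'))  # True
print(accepts(reads_windows_only, 'OSX'))  # False
print(accepts(reads_osx_only, 'OSX'))      # True
```

Nothing in `char` or `seq` knows anything about either format; the domain specificity is entirely a product of how the general pieces are composed, which is the conceptual point of the analogy.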
In the grammar domain: say I can show that, using Merge and other simple operations that are not linguistically proprietary, I can "derive" the binding theory (I try to show this in 'A Theory of Syntax', though differently (and more idiosyncratically) than I sketch here). Here's the proposal:
(i) If A is antecedent to B then A and B form a constituent.
(ii) Merge in both E and I forms is a basic operation
(iii) Full Interpretation holds: a DP must be interpretable at both interfaces, which means it bears both a theta role and a case value.
(iv) There is no DS and so movement into theta positions is ok.
(v) Minimality holds of movement
(vi) Extension regulates Merge
The net effect of (i)-(vi) is to have reflexivization "live on" A-chains. A-chain properties follow from minimality, Extension, and Full Interpretation. I take these latter properties to reflect domain-general/computationally general features of FL and so NOT special to FL (this may be wrong, but I argue for it, so let me get away with that here).
The effect of having reflexivization live on A-chains derives binding Principle A (this is easy to see given the LGB relation between NP-trace and movement via binding theory; I reverse the relation, relating them via movement theory). The locality follows from (iii) and (v). The c-command condition follows from (vi).
Say, for purposes of discussion, that this indeed derives Principle A of the binding theory, as I said. Now what does the kid have to learn to master Principle A? Well, everything but the fact that 'himself' is spelled out as the tail of the chain is "given." So that's what the kid has to "learn," i.e. that reflexives are spell-outs of A-chain tails (roughly the old Lees-Klima account in gussied-up form). Note, as I indicated in a reply to an earlier question, this is all the kid has to learn on the GB theory as well (i.e. that reflexives fall under Principle A). The same thing. This is not surprising: if successful, we have derived Principle A as the product of Merge plus these other principles. In other words, if I reduce Principle A to movement theory, then, if FL is structured as the reducing picture envisages, I am in the same position wrt Plato's problem and Binding Theory as I was in the GB era. The answer to Plato's problem has not changed. The information is domain specific, though the computational circuits used to build the FL circuit board that embodies the competence are largely domain general (i.e. circuits and properties available domain generally) in their properties and modes of operation.
Now, I am not saying that this is correct (though I do like it). I am asking, ASSUMING IT OR SOMETHING LIKE IT CAN BE DONE, whether the fact of a Minimalist reduction means that all learning is domain general. The answer I give is no, once you see that the Minimalist proposal is not a competitor to the GB one but an attempt to place it on more solid foundations. So there's my eaten cake, and I plan another big helping.
Now, your categorization question: MP (and GB for that matter) had very little to say about words and their categories (i.e. the generalizations adduced were not nearly as impressive as what we had to say about syntax, IMHO). Thus, what I said did not address these questions. Truth be told, IMHO we know very little about the intricacies of word learning and the innate knowledge required to get it off the ground. Chomsky's discussion of these matters (riffing on Austin and the later Wittgenstein) is fascinating but so far theoretically inconclusive. So, the short answer is that NOTHING I KNOW ABOUT MP HAS ANYTHING ENLIGHTENING TO SAY ABOUT THIS. I also know that Chomsky believes the same thing. So, as far as I can tell, we have no answer to this question from an MP point of view. However, most arguments for rich UG were made using syntactic facts like those GB and MP do deal with, so the fact that we have no MP story here strikes me as of little relevance.
In sum, what you are pointing out is that there are other important, poorly understood questions. Yup, many. Do these require domain-specific innate knowledge? Who knows? I am not being entirely flippant (though I am being a teensy bit). Here's why. MP makes sense because we have theories like GB. Till GB came up with its laws of grammar, the question of how to reduce them to simple principles was way premature. Ok, what do we know about word learning and categorization that comes even close to being interesting? Not much. So the Minimalist question is entirely out of place there. The thing about research questions is that they make sense in some areas and not in others. They make sense for syntax, and so we are making some interesting progress in answering them there. I have no reason to think that they make sense for the problems you mention, and so I am not surprised that there is not much to say. Of course, should categorization and word acquisition turn out to be subject to domain-general procedures, I would be delighted. If not, I would start to ask what makes them possible and how much domain specificity we need. But till we have interesting "laws" here, I will refrain from indulging minimalist confabulations.