I hesitate to say ANYTHING in response to your last post because its positive tone leaves me glowing; it would be wonderful if everyone thought the same of our findings and our new work. But there are some foundational issues where you and I diverge, and they're worth a good vetting, I think. I really was totally surprised at your equating slow/probabilistic/associationistic learning with "learning in general," i.e., that if some change in an organism's internal state of knowledge isn't attributable to the action of this very mechanism, then it isn't learning at all, but "something else." My own view of what makes something learning is much more prosaic and excludes the identification of the mechanism: if you come to know something via information acquired from the outside, it's learning. So acquiring knowledge of Mickey Mantle's batting average in 1950, or finding out that the concept 'dog' is pronounced "dog" in English, both count as learning, to me. At the opposite extreme, of course, are changes due entirely, so far as we know, to internal forces, e.g., growing hair on your face or replacing your down with feathers (though let's not quibble; some minimal external conditions have to be met even there). The in-between cases are the ones to which Plato drew our attention, and which Noam is perhaps most responsible for reviving in the modern era:
“But if the knowledge which we acquired before birth was lost by us at birth, and afterwards by the use of the senses we recovered that which we previously knew, will not that which we call learning be a process of recovering our knowledge, and may not this be rightly termed recollection?”
(Plato, Phaedo [ca. 360 BCE])
I take UG to be such a case: something postulated as an internal state, probably a logically necessary one, that is implicit in the organism before knowledge (say, information about English or Urdu) is obtained from the outside, and which guides and helps organize acquisition of that language-specific knowledge. I have always believed that most language acquisition is consequent on and derivative from this structured, preprogrammed basis, yet something crucially comes from the outside and yields knowledge of English, above and beyond the framework principles and functions of UG. Syntactic bootstrapping, for example, is meant to be a theory describing how knowledge of word meaning is acquired within (and "because of") certain pre-existing representational principles, for example that, globally speaking, "things" surface as NPs (further divided into the animate and inanimate) and "events/states" as clauses, so the structure NP gorps that S would be licensed for "gorp" iff its semantics expresses a relation between a sentient being and an event, e.g., "knowing" but not "jumping."
The problems we have in mind to address in the recent work you've been discussing, to my delight, are these: how do you ever discover where the NPs etc. are in English? That its subjects precede its verbs, roughly speaking? This knowledge comes from outside (it is not true of all languages) and has to be acquired to concretize the domain-specific procedure in which you learn word meanings, in part at least, by examining the structures for which they're licensed. I have argued that, at the earliest stages, you can't make contact with this preexisting knowledge just because you don't know, e.g., where the subject of the sentence is. To find out, you have to learn a few "seed words" via a domain-general procedure (available across all the species of animals we know, perhaps barring the paramecia). That procedure has almost always (since Hume, anyhow) been conceived as association (in its technical sense). As I keep mentioning, success in this procedure is vanishingly rare, it is horrible, because of the complexity and variability of the external world, though it is reined in to some extent by some (domain-specific) perceptual-conceptual biases (see Markman and Wachtel, inter alia). Apparently, you can acquire only a pitiful set of whole-object concrete concept labels by this technique. Though it is so restrictive, we take it as crucial: it is the only possibility that keeps "syntactic bootstrapping" from being absolutely circular. It provides the enabling data for SB, enough nouns to hint at where the subject is; hence, given this knowledge of "man" and "flower" and the observation of a man watering a flower, you learn not only the verb "water" but the fact that English is SVO.
So back to the point: I think word learning starts with a domain-general procedure that acquires "dog" by its cooccurrence with dog-sightings, given innate knowledge of the concept 'dog,' and that one learns (yes) that English is SVO as a second step, as above. This early procedure gives you concretized PS representations, domain-specific, language-specific ones, that allow you to infer "a verb with mental content" from its appearance in certain clausal environments. That's my story.
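Just to make the division of labor concrete, here is a toy sketch of the two steps as I've described them. Everything in it is invented for illustration (the data, the function names, the 0.75 threshold); it is not a model of the actual proposal's mechanics, only a cartoon of the logic: a domain-general associative pass learns a few seed nouns from word/scene co-occurrence, and those seeds then let you locate subject and object, read off the word order, and peg the leftover word to the witnessed event.

```python
# Toy illustration only: all data, names, and the 0.75 threshold are
# invented for this sketch, not taken from the actual proposal.
from collections import defaultdict

def learn_seed_nouns(episodes, threshold=0.75):
    """Domain-general step: tally word/referent co-occurrences across
    scenes and keep a word-referent pair only if its rate of
    co-occurrence clears the threshold."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for words, referents in episodes:
        for w in words:
            totals[w] += 1
            for r in referents:
                counts[w][r] += 1
    lexicon = {}
    for w, refs in counts.items():
        best = max(refs, key=refs.get)
        if refs[best] / totals[w] >= threshold:
            lexicon[w] = best
    return lexicon

def bootstrap(sentence, agent, patient, lexicon):
    """Domain-specific step: seed nouns locate the subject and object,
    the word order is read off their positions, and the leftover word
    is pegged to the witnessed event."""
    pos = {lexicon[w]: i for i, w in enumerate(sentence) if w in lexicon}
    subj_i, obj_i = pos[agent], pos[patient]
    verb_i = next(i for i in range(len(sentence)) if i not in (subj_i, obj_i))
    roles = {subj_i: "S", verb_i: "V", obj_i: "O"}
    return "".join(roles[i] for i in sorted(roles)), sentence[verb_i]

# "the" co-occurs with everything, so association rightly rejects it;
# "dog," "man," and "flower" survive as seed nouns.
episodes = [
    (["the", "dog"], {"DOG"}),
    (["the", "dog"], {"DOG", "MAN"}),
    (["the", "man"], {"MAN"}),
    (["the", "man"], {"MAN", "FLOWER"}),
    (["the", "flower"], {"FLOWER"}),
    (["the", "flower"], {"FLOWER", "DOG"}),
]
lexicon = learn_seed_nouns(episodes)
# Seeing a man water a flower while hearing "man waters flower":
order, verb = bootstrap(["man", "waters", "flower"], "MAN", "FLOWER", lexicon)
print(order, verb)  # SVO waters
```

Note how the cartoon reproduces the circularity-breaking role of the seed nouns: without "man" and "flower" already in the lexicon, there is no way to locate the subject, and hence no way to learn either the order or the verb.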
What my argument with you is now about is the contrapositive to Plato. I am asking: "If you have to have information from the outside, information as to the licensing conditions for "dog" (and, ultimately, "bark"), is acquiring that information not rightly termed learning?" I think it is. But the mechanism turns out (if we're right) to be more like brute-force triggering than probabilistic compare-and-contrast across instances. The exciting thing, as you mention, is that others throughout the last century (e.g., Rock, Bower, several others) insisted that learning in quite different areas might work that way too. Most exciting, I think, is the work starting in the 1940s and continued in the exquisite experimental work of Randy Gallistel, showing that it is probably true of the wasps, the birds and the bees, and the rats as well, even when learning stupid things in the laboratory (well, not Mickey Mantle's batting average, but, at least, where the food is hidden in the maze).