Wednesday, January 10, 2018
Section 4 goes over what LUTG takes to be some core powers of human minds. This includes naïve theories of how the physical world functions and how animate agents operate. In addition, with a nod to Fodor, it outlines the critical role of compositionality in allowing for human cognitive productivity. It is a nice discussion and makes useful comments on causal models and their relation to generative ones (section 4.2.2).
Now let’s note a few eggshells. A central recurring feature of the LUTG discussion is the observation that it is unclear how or whether current DL approaches might integrate these necessary mechanisms. The paper does not come right out and say that DL models will have a hard time with these unless DL radically changes its sub-symbolic associationist pattern-matching monomania, but it strongly suggests this. Here’s a taste of this recurring theme (and please note the R overtones).
It is not hard to see from how LUTG makes its very reasonable case that it is a bit nervous about DL (the current star of AI). LUTG is rhetorically covering its posterior while (correctly) noting that unreconstructed DL will never make the grade. The same wariness makes it impossible for LUTG to acknowledge its great debt to R predecessors. As LUTG states, its “goal is to build on their [neural networks, NH] successes rather than dwell on their shortcomings” (2). But those who always look forward and never back won’t move forward particularly well either (think Obama and the financial crisis). Understanding that E is deeply inadequate is a prerequisite for moving forward. It is no service to be mealy-mouthed about this. One does not finesse one’s way around very influential bad ideas.
Ok, so I have a few reservations about how LUTG makes its basic points. That said, this is a very useful paper. It is nice to see this coming out of the very influential Bayesian group at MIT and in a prominent place like B&BS. I am hoping that it indicates that the pendulum is swinging away from E and towards a more reasonable R conception of minds. As I’ve noted, the analogies with standard GG practice are hard to miss. In addition, LUTG rightly points to the shortcomings of connectionist/deep learning/neural net approaches to mental life. This is good. It may not be news to many of us, but if this signals a return to R conceptions of mind, it is a very positive step in the right direction.
Thursday, January 4, 2018
I have several motivations for writing these posts. First, writing them, and reading & replying to comments, really helps me sharpen my own thinking on the issues. (Whether I’m convincing anyone but myself is a separate matter, of course.) Additionally, though, it is my impression that when it comes to the syntax-semantics mapping, the working assumption that the mapping in question is transparent – a wholly legitimate research heuristic, of course – is in practice often elevated to the status of ontological principle. This, in turn, licenses potentially problematic inferences about syntax. And it is these cases that I wish to highlight.
I hasten to add that I’m not sure there’s anything different in kind here from what goes on in any other “interface” work. That is, I don’t mean to impugn syntax-semantics work in particular (as opposed to, say, syntax-morphology work or whatever else). It’s just that the particular syntax-semantics inferences I’m talking about are ones that I often bump up against in my own work, and I often get the feeling that they are accorded the status of “established truths” – which places the burden of proof on any proposal that would contradict them. It’s this view that I’d like to challenge here.
Finally, for interesting discussions pertaining to the substance of this post in particular, I’d like to thank Amy Rose Deal – who should not, of course, be held responsible for any of its contents; in fact I’m fairly sure she would disagree!
Okay, let’s get to it...
What is an “A-position”? Originally, the ‘A’ was supposed to be a mnemonic for “Argument” – the idea being that an A-position is any position that could, in principle, introduce arguments. A particular set of properties was then shown to correlate with being in, or moving to, an A-position. Most important for our current purposes are the binding-related ones: A-positions were the positions from which one could antecede novel binding dependencies. Hence the well-known kind of asymmetry between (1a) and (1b):
The first (here) discusses the complex ways that birds cooperate while singing to enhance their partners' responses. This is pretty sophisticated behavior, and it strikes me as having a more than passing resemblance to turn-taking activity in cooperative conversation. If this analogy is on the right track, then it is a case where something that we find in human language use has analogues in other species. Note that, so far as we can tell, cooperation of this sort does not endow the cooperators with anything like the unbounded hierarchical syntax of the kind found in human language. Which just goes to show (if this were needed) that the fact that communication can be socially directed and involves cooperation does not suffice to explain its formal properties. I am sure you did not need reminding of this, though there are some who still suggest that ultimately such forms of cooperation will get one all the way to recursive syntax.
Here's another piece on plant cognition, this time decision making. Their strategic thinking is quite striking, with plants suiting their responses to the strategic options available to them. Their "behavior" is very context sensitive, and it appears that they maximize their access to light by deploying several different strategies appropriately. How they do this is unclear, but that they do it seems well established. As Michael Gruntman, one of the researchers, noted: "Such an ability to choose between different responses according to their outcome could be particularly important in heterogeneous environments, where plants can grow under neighbors with different size, age, or density, and should therefore be able to choose their appropriate strategy." And all without brains.
The third piece (here) is a spot by Gelman. It more or less speaks for itself, but it usefully makes the point again that stats without theory usually produces junk. We cannot repeat this often enough, especially given his observation that this message has not filtered through to the professionals who use the statistical machinery.