Monday, January 22, 2018

The Gallistel-King conjecture; an update

As time goes by, the odds on bets against the veracity of the Gallistel-King conjecture (see here and here) are getting longer and longer. Don’t get me wrong. The cog-neuro world is not about to give up on its love affair with connectionism. It’s just that as the months pass, the problems with this (sadly, hyper Empiricist) view of things become ever more evident, and this readies people for a change. Moreover, as you can’t beat something with nothing but the promise of something (you actually need a concrete something), it is heartening to see that the idea of classical computation within the neuron/cell is becoming ever more conventional. Here is a recent report that shows how far things have come.

It shows how living cells can classically compute, in the sense of programmable circuits (“predictable and programmable RNA-RNA interactions”), which “resemble conventional electronic circuits,” with the added feature that they “self-assemble” within cells to “sense incoming messages and respond to them by producing a particular computational output.” Furthermore, “these switches can be combined…to produce more complex logic gates capable of evaluating and responding to multiple inputs, just like a computer may take several variables and perform sequential operations like addition and subtraction in order to reach a final result.” Recall that, as Gallistel has long argued, being able to compute a number, store it, and use it in further computation is precisely the kind of neural computation we need to be cognitively adequate. We now know that cells have the chemical wherewithal to accomplish this reliably using little RNA circuits, and that this is actually quite easy for the cell to do (“The RNA-only approach to producing cellular nanodevices is a significant advance, as earlier efforts required the use of complex intermediaries, like proteins”).
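To make the electronics analogy concrete, here is a toy illustration of my own (ordinary Boolean gates in Python, not a model of the actual RNA chemistry): once a system can implement a couple of reliable two-input gates, composing them yields a half-adder, i.e. the kind of addition the report describes.

```python
# Toy sketch: two simple logic gates compose into a half-adder.
# This illustrates the *computational* analogy only, not RNA mechanisms.

def AND(a, b):
    # fires only when both input signals are present
    return a & b

def XOR(a, b):
    # fires when exactly one input signal is present
    return a ^ b

def half_adder(a, b):
    """Combine the two gates to add two one-bit inputs: (sum, carry)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```

The point is simply that composition is cheap: given a few dependable gates, addition, subtraction, and the rest of classical computation come along for free.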

So, the idea that cells can classically compute is true. It would be surprising if evolution developed an entirely novel computational procedure to get our cognitive capacities off the ground instead of exploiting the computational potential of readily available ones. This is possible (of course), but it seems like a weird way to proceed if the ingredients for a standard (symbolic) kind of computation are already there for the taking. This is the point of the Gallistel-King conjecture, and to me, it seems like a very good one.

Wednesday, January 10, 2018

An update on Empiricism vs Rationalism debates

All papers involve at least two levels of discourse. The first comprises the thesis: what is being argued? What evidence is there for what is being argued? What’s the point of what is being argued? What follows from what is being argued? The second is less a matter of content than of style: how does the paper say what it says? This latter dimension most often reveals the (often more obscure) political dimensions of the discipline, and relates to topics like the following: What parts of one’s butt is it necessary to cover so as to avoid annoying the powers that be? Which powers is it necessary to flatter in order to get a hearing, or to avoid the ruinous “high standards of scrutiny” that can always be deployed to block publication? Whose work must get cited and whose can be safely ignored? What are the parameters of the discipline’s Overton Window? If you’ve ever been lucky enough to promote an unfashionable position, you have developed a sensitivity to these “howish” concerns. And if you have not, you should. They are very important. Nothing reveals the core shibboleths (and concomitant power structures) of a discipline more usefully than a forensic inquiry into the nature of the eggshells upon which a paper is treading. In what follows I would like to do a little eggshell inspection in discussing a pretty good paper from a (for me) unexpected source (a bunch of Bayesians). The authors are Lake, Ullman, Tenenbaum and Gershman (LUTG), all from the BCS group at MIT. The paper (here) is about the next steps forward for an AI that aspires to be cognitively interesting (like the AI of old), and maybe even technologically cutting edge (though LUTG makes this second point very gingerly).

But that is not all the paper is about. LUTG is also a useful addition to the discussions about Empiricism (E) vs Rationalism (R) in the study of mind, though LUTG does not put matters in quite this way (IMO, for quasi-political reasons; recall the “howish” concerns!). To locate LUTG on this axis will require some work on my part. Here goes.

As I’ve noted before, E and R have both a metaphysical and an epistemological side. Epistemologically, E takes minds to be initially (relatively) unstructured, with mental contour forged over time as a by-product of environmental input as sampled by the senses. Minds on this view come to represent their environments more or less by copying their structures as manifest in the sensory input. Successful minds are ones that effectively (faithfully and efficiently) copy the sensory input. As Gallistel & Matzel put it, for Es minds are “recapitulative,” their function being to reliably take “an input that is part of the training input, or similar to it, [to] evoke[s] the trained output, or an output similar to it” (see here for discussion and links). Another way of putting this point is that E minds are pattern matchers, whose job is to track the patternings evident in the input (see here): pattern matching is recapitulative, and for E cognition amounts to tracking the patternings in the sensory input so as to reliably reproduce them.

Coupled with this E theory of mind comes a metaphysics. Reality has a relatively flat causal structure. There is no rich hierarchy of causal mechanisms whose complex interactions lie behind what we experience. Rather, what you sense is all there is. Therefore, effectively tracking and cataloguing this experience and setting up the right I/O associations suffices to get at a decent representation of what exists. There is no hidden causal iceberg of which this is the sensory/perceptual tip. The I/O relations are all there is. Not surprisingly (this is what philosophers are paid for), the epistemology and the metaphysics fit snugly together.

R combines its epistemology and its metaphysics differently. Epistemologically, R regards sensorily innocent minds to be highly structured. The structure is there to allow minds to use sensory/perceptual information to suss out the (largely) hidden causal structures that produce these sensations/perceptions. As Gallistel & Matzel put it, R minds are more like information processing systems, structured to enable minds to extract information about the unperceived causal structures of the environment that generate the observed patterns of sensation/perception. On the R view, minds sample the available sensory input to construct causal models of the world which generate the perceived sensory patterns. And in order to do this, minds come richly stocked with the cognitive wherewithal necessary to build such models. R epistemology takes it for granted that what one perceives vastly underdetermines what there is, and takes it to be impossible to generate models of what there is from sensory perception without a big boost from given (i.e. unlearned) mental structures that make possible the relevant induction to the underlying causal mechanisms/models.

R metaphysics complements this epistemological view by assuming that the world is richly structured causally and that what we sense/perceive is a complex interaction effect of these more basic complexly interacting causal mechanisms. There are hidden powers etc. that lie behind what we have sensory access to and that in no way “resembles” (or “recapitulates”) the observables (see here for discussion).

Why do I rehearse these points yet again? Because I want to highlight a key feature of the E/R dynamic: what distinguishes E from R is not whether pattern matching is a legit mental operation (both views agree that it is (or at least, can be)). What distinguishes them is that Es think that this is all there is (and all there needs to be), while Rs reserve an important place for another kind of mental process, one that builds models of the underlying complex non-visible causal systems that generate these patterns. In other words, what distinguishes E from R is the belief that there is more to mental life than tracking the statistical patterns of the inputs. R doesn’t deny that this is operative. R denies that this suffices. There needs to be more, a lot more.[1]
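To make the contrast vivid with a toy example of my own (nothing here is from LUTG or Gallistel & Matzel): an E-style learner simply tallies the observed patterns, while an R-style learner uses Bayes’ rule to infer which hidden cause generated them. The particular "hypotheses" and numbers below are invented for illustration.

```python
# Toy contrast between E-style pattern tracking and R-style model building.
from collections import Counter

observations = [1, 1, 0, 1, 1, 1, 0, 1]  # outcomes of some unseen process

# E-style: record the pattern of outcomes; prediction recapitulates the input.
e_record = Counter(observations)
print("E-style record:", dict(e_record))

# R-style: posit hidden causes (two candidate generating biases) and infer
# which one best explains the data via Bayes' rule.
hypotheses = {"fair process": 0.5, "biased process": 0.8}
posterior = {h: 0.5 for h in hypotheses}          # uniform prior
for obs in observations:
    for h, p in hypotheses.items():
        posterior[h] *= p if obs == 1 else (1 - p)
total = sum(posterior.values())
posterior = {h: v / total for h, v in posterior.items()}
print("R-style posterior over hidden causes:", posterior)
```

The E-style learner can only replay the frequencies it saw; the R-style learner has a model of what lies *behind* the data, and so can answer new questions (e.g. predict runs it never observed) that mere tallying cannot.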

Given this backdrop, it is clear that GG is very much in the R tradition. The basic observation is that human linguistic facility requires knowledge of a G (a set of recursive rules). Gs are not perceptually visible (though their products often are). Further, cursory inspection of natural languages indicates that human Gs are quite complex and cannot be acquired solely by induction (even by sophisticated discovery procedures that allow for inductions over inductions over inductions…). Acquiring Gs requires some mental pre-packaging (aka, UG) that enables an LAD to construct a G on the basis of simple, haphazard bits of G output (aka, PLD). Once acquired, Gs can be used to execute many different kinds of linguistic behaviors, including parsing and producing novel sentences and phrases. That’s the GG conceit in a small nutshell: human linguistic facility implicates Gs, which implicates UGs, both G and UG being systems of knowledge that can be put to multiple uses, including G acquisition, and the production and comprehension of an unbounded number of very different novel linguistic expressions within a given natural language. G, UG, competence vs performance: the hallmarks of an R theory of linguistic minds.
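For concreteness, here is a toy recursive G of my own devising (obviously nothing like a serious grammar of a natural language): a handful of rules licenses an unbounded set of novel sentences, none of which need ever have appeared in the PLD.

```python
# A toy recursive grammar: finite rules, unbounded output.
import itertools

rules = {
    "S":  [["NP", "VP"]],
    "NP": [["the cat"], ["the dog"]],
    "VP": [["slept"], ["thought", "that", "S"]],  # recursion: S inside VP
}

def generate(symbol, depth):
    """Yield strings derivable from `symbol` with at most `depth` embeddings."""
    if symbol not in rules:          # terminal: yield the word itself
        yield symbol
        return
    for expansion in rules[symbol]:
        if "S" in expansion and depth == 0:
            continue                 # cut off recursion at the depth bound
        parts = [list(generate(tok, depth - (tok == "S"))) for tok in expansion]
        for combo in itertools.product(*parts):
            yield " ".join(combo)

# The set of sentences grows without bound as the depth bound is raised.
for d in range(3):
    print(d, len(list(generate("S", d))))
```

Raising the depth bound keeps producing new sentences ("the cat thought that the dog thought that the cat slept", …), which is the competence-side point: the rules, not any finite inventory of heard strings, are what a speaker knows.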

LUTG makes effectively these points but without much discussing the language case (it does at the end, but mainly to say that it won’t discuss it). LUTG’s central critical claim is that human cognition relies on more than pattern matching. There is, in addition, model building, which relies on two kinds of given mental contents to guide learning. The first kind, what LUTG calls “core” (and an R would call “given”), involves substantive specific content (e.g. naïve physics and psychology). The second involves formal properties (e.g. compositionality and certain kinds of formal analogy (what LUTG calls “learning-to-learn”)).[2] LUTG notes that these two features pre-condition further learning and are indispensable. LUTG further notes that the two mental powers (i.e. model building and pattern recognition) can likely be fruitfully combined, though it (rightly, IMO) insists that model building is the central cognitive operation. In other words, LUTG recapitulates the observations above: R theories can incorporate E mechanisms, E mechanisms alone are insufficient to model human cognition, and without the part left out, E pattern matching models are poor fits with what we have tons of evidence to be central features of cognitive life. Here’s how the abstract puts it:

We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

The paper usefully goes over these points in some detail. It further notes that this conception of the learning problem in cognition (I prefer the term “acquisition” myself) challenges the associationist assumptions which LUTG observes (again, rightly IMO) are characteristic of most work in connectionism, machine learning and deep learning. LUTG also points to ways that “model free methods” (aka pattern matching algorithms) might usefully supplement the model building cognitive basics to improve performance and implementation of model based knowledge (see 1.2 and 4.3.2).[3]

Section 3 is the heart of the critique of contemporary AI, which largely ignores the (R) model building ethos that LUTG champions. As it observes, the main fact about humans when compared with current AI models is that humans “learn a lot more from a lot less” (6). How much more? Well, human learning is very flexible and rarely tied to the specifics of the learning situation. LUTG provides a nice discussion of this in the domain of learning written characters. Further, human learning generally requires very little “input.” Often, a couple of minutes with, or a few exposures to, the relevant stimulus is more than enough. In fact, in contrast to current machine learning or DL systems, humans do not need to have the input curated and organized (or, as Geoff Hinton recently put it (here): humans, as opposed to DLers, “clearly don’t need all the labeled data”). And why not? Because unlike what DLers and connectionists and associationists have forever assumed, human minds (and hence brains) are not E devices but R ones. Or as LUTG puts it (9):

People never start completely from scratch, or even close to “from scratch,” and that is the secret to their success. The challenge of building models of human learning and thinking then becomes: How do we bring to bear rich prior knowledge to learn new tasks and solve new problems so quickly? What form does that prior knowledge take, and how is it constructed, from some combination of inbuilt capacities and previous experience?

In short, the blank tablet assumption endemic to E conceptions of mind is fundamentally misguided and a (the?) central problem of cognitive psychology is to figure out what is given and how this given stuff is used. Amen (and about time)!

Section 4 goes over what LUTG takes to be some core powers of human minds. This includes naïve theories of how the physical world functions and how animate agents operate. In addition, with a nod to Fodor, it outlines the critical role of compositionality in allowing for human cognitive productivity.[4] It is a nice discussion and makes useful comments on causal models and their relation to generative ones (section 4.2.2).

Now let’s note a few eggshells. A central recurring feature of the LUTG discussion is the observation that it is unclear how or whether current DL approaches might integrate these necessary mechanisms. The paper does not come right out and say that DL models will have a hard time with these without radically changing their sub-symbolic, associationist, pattern-matching monomania, but it strongly suggests this. Here’s a taste of this recurring theme (and please note the R overtones).

As in the case of intuitive physics, the success that generic networks will have in capturing intuitive psychological reasoning will depend in part on the representations humans use. Although deep networks have not yet been applied to scenarios involving theory of mind and intuitive psychology, they could probably learn visual cues, heuristics, and summary statistics of a scene that happens to involve agents. If that is all that underlies human psychological reasoning, a data-driven deep learning approach can likely find success in this domain.

However, it seems to us that any full formal account of intuitive psychological reasoning needs to include representations of agency, goals, efficiency, and reciprocal relations. As with objects and forces, it is unclear whether a complete representation of these concepts (agents, goals, etc.) could emerge from deep neural networks trained in a purely predictive capacity. Similar to the intuitive physics domain, it is possible that with a tremendous number of training trajectories in a variety of scenarios, deep learning techniques could approximate the reasoning found in infancy even without learning anything about goal-directed or socially directed behavior more generally. But this is also unlikely to resemble how humans learn, understand, and apply intuitive psychology unless the concepts are genuine. In the same way that altering the setting of a scene or the target of inference in a physics-related task may be difficult to generalize without an understanding of objects, altering the setting of an agent or their goals and beliefs is difficult to reason about without understanding intuitive psychology.

Yup. Right on. But why so tentative? “It is unclear”? Nope, it is very clear, and has been for the roughly 300 years since these issues were first extensively discussed. And we all know why. Because concepts like “agent,” “goal,” “cause,” and “force” are not observables and are not reducible to observables. So, if they play central roles in our cognitive models, then pattern matching algorithms won’t suffice. But as this is all that E DL systems countenance, DL is not enough. This is the correct point, and LUTG makes it. But note the hesitancy with which it does so. Is it too much to think that this likely reflects the issue mooted at the outset, the implicit politics that one’s dance over the eggshells reveals?

It is not hard to see from how LUTG makes its very reasonable case that it is a bit nervous about DL (the current star of AI). LUTG is rhetorically covering its posterior while (correctly) noting that unreconstructed DL will never make the grade. The same wariness makes it impossible for LUTG to acknowledge its great debt to R predecessors.[5] As LUTG states, its “goal is to build on their [neural networks, NH] successes rather than dwell on their shortcomings” (2). But those that always look forward and never back won’t move forward particularly well either (think Obama and the financial crisis). Understanding that E is deeply inadequate is a prerequisite for moving forward. It is no service to be mealy-mouthed about this. One does not finesse one’s way around very influential bad ideas.

Ok, so I have a few reservations about how LUTG makes its basic points. That said, this is a very useful paper. It is nice to see this coming out of the very influential Bayesian group at MIT, and in a prominent place like B&BS. I am hoping that it indicates that the pendulum is swinging away from E and towards a more reasonable R conception of minds. As I’ve noted, the analogies with standard GG practice are hard to miss. In addition, LUTG rightly points to the shortcomings of connectionist/deep learning/neural net approaches to mental life. This is good. It may not be news to many of us, but if this signals a return to R conceptions of mind, it is a very positive step in the right direction.[6]

[1] R often goes farther: that even tracking the relevant perceptual regularities requires lots of given mental baggage. The world does not come pre-labeled. So zeroing in on the relevant dimensions for inductive generalization itself requires lots of pre-packaged “knowledge.” Geoff Hinton (the daddy of deep learning) seems to have come to a similar view of late concerning how hard it is to get things off the ground without curated data. See below for a reference.
[2] GGers should find this reminiscent of Chomsky’s discussion in Aspects of substantive and formal universals and how these interact to create Gs (models) of the ambient degenerate and deficient linguistic input available to the child (PLD).
[3] Or to put this in GG friendly terms: LUTG resurrects something very like a competence/performance distinction, model building being analogous to the former and model free methods applied to these models being analogous to the latter. An idea I found interesting is that given models of competence provide a useful domain for performance models, wherein sophisticated pattern matching algorithms do their work. Conceptually, this idea seems very similar to what Charles Yang has been advocating as well (see here).
[4] Actually, though compositionality is critical to productivity, it does not suffice for generating an “infinite number of thoughts” or using an “infinite number of sentences” “from a finite set of primitives” (14). For this we need more than compositionality; we need recursion as well.
[5] It is amazing, IMO, how small a role Chomsky plays in LUTG’s discussion. So far as I can tell, all of its major points were developed in the modern period by him. I am pretty sure that some of the authors know this but that highlighting this fact would hurt the paper politically by offending the relevant leading DL lights.
[6] BTW, LUTG has a nice discussion of the biological “plausibility” of neural net models of the brain. The short point is that aside from being pictorially suggestive, there is no real reason for thinking that brains are like connectionist nets. As LUTG puts it (20):

Many seemingly well-accepted ideas concerning neural computation are in fact biologically dubious, or uncertain at best…

For example, most neural networks use some form of gradient-based (e.g. backpropagation) or Hebbian learning. It has long been argued, however, that backpropagation is not biologically plausible. As Crick (1989) famously pointed out, backpropagation seems to require that information be transmitted backward along the axon, which does not fit with realistic models of neuronal function…

As LUTG observes, this should not in and of itself stop people from investigating neural nets as possible models of brain computation, but it should put an end to the prejudice that brains are nets because they look net-like. Sheesh!
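Crick’s point is easy to see in the algorithm itself. Here is a bare-bones sketch of backprop for a two-layer net (pure Python, standard textbook stuff with made-up numbers, not tied to any particular model): the update for the first layer’s weights requires the output error to travel backward through the second layer’s weights, and it is that backward traffic that has no obvious axonal analogue.

```python
import math

# Minimal 2-layer network trained by backpropagation on one example.
x = [0.5, -1.0]                 # input
y = 1.0                         # target output
w1 = [[0.1, 0.2], [-0.3, 0.4]]  # input -> hidden weights (2 hidden units)
w2 = [0.7, -0.6]                # hidden -> output weights

# Forward pass (information flows input -> hidden -> output).
h = [math.tanh(sum(w1[j][i] * x[i] for i in range(2))) for j in range(2)]
y_hat = sum(w2[j] * h[j] for j in range(2))

# Backward pass: the error at the output...
delta_out = y_hat - y
# ...must be sent *backward* through w2 to reach each hidden unit.
# This is the step Crick flagged: real axons conduct one way.
delta_h = [delta_out * w2[j] * (1 - h[j] ** 2) for j in range(2)]

# Gradient-descent weight updates.
lr = 0.1
w2 = [w2[j] - lr * delta_out * h[j] for j in range(2)]
w1 = [[w1[j][i] - lr * delta_h[j] * x[i] for i in range(2)] for j in range(2)]
print("error signal sent backward through w2:", delta_h)
```

The forward wiring alone tells a hidden unit nothing about how it contributed to the output error; computing `delta_h` needs the downstream weights, which is exactly the biologically awkward requirement.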

Thursday, January 4, 2018

Phi features, binding, and A-positions


This post continues a theme started here and here. Broadly, this series of posts is an attempt to highlight the daylight that exists between syntax and semantics.

I have several motivations for writing these posts. First, writing them, and reading & replying to comments, really helps me sharpen my own thinking on the issues. (Whether I’m convincing anyone but myself is a separate matter, of course.) Additionally, though, it is my impression that when it comes to the syntax-semantics mapping, the working assumption that the mapping in question is transparent – a wholly legitimate research heuristic, of course – is in practice often elevated to the status of ontological principle. This, in turn, licenses potentially problematic inferences about syntax. And it is these cases that I wish to highlight.

I hasten to add that I’m not sure there’s anything different in kind here from what goes on in any other “interface” work. That is, I don’t mean to impugn syntax-semantics work in particular (as opposed to, say, syntax-morphology work or whatever else). It’s just that the particular syntax-semantics inferences I’m talking about are ones that I often bump up against in my own work, and I often get the feeling that they are accorded the status of “established truths” – which places the burden of proof on any proposal that would contradict them. It’s this view that I’d like to challenge here.

Finally, for interesting discussions pertaining to the substance of this post in particular, I’d like to thank Amy Rose Deal – who should not, of course, be held responsible for any of its contents; in fact I’m fairly sure she would disagree!

Okay, let’s get to it...


What is an “A-position”? Originally, the ‘A’ was supposed to be a mnemonic for “Argument” – the idea being that an A-position is any position that could, in principle, introduce arguments. A particular set of properties was then shown to correlate with being in, or moving to, an A-position. Most important for our current purposes are the binding-related ones: A-positions were the positions from which one could antecede novel binding dependencies. Hence the well-known kind of asymmetry between (1a) and (1b):

Some distractions

Here are three very short pieces that might amuse you.

The first (here) discusses the complex ways that birds cooperate while singing to enhance their partner's responses. This is pretty sophisticated behavior, and it strikes me as having a more than passing resemblance to turn-taking activity in cooperative conversation. If this analogy is on the right track, then it is a case where something that we find in human language use has analogues in other species. Note that, so far as we can tell, cooperation of this sort does not endow the cooperators with anything like the unbounded hierarchical syntax of the kind found in human language. Which just goes to show (if this were needed) that the fact that communication can be socially directed and involves cooperation does not suffice to explain its formal properties. I am sure you did not need reminding of this, though there are some who still suggest that ultimately such forms of cooperation will get one all the way to recursive syntax.

Here's another piece on plant cognition, this time decision making. Their strategic thinking is quite striking, with plants suiting their responses to the strategic options available to them. Their "behavior" is very context sensitive, and it appears that they maximize their access to light using several different strategies appropriately. How they do this is unclear, but that they do it seems well established. As Michael Gruntman, one of the researchers, noted: "Such an ability to choose between different responses according to their outcome could be particularly important in heterogeneous environments, where plants can grow under neighbors with different size, age, or density, and should therefore be able to choose their appropriate strategy." And all without brains.

The third piece (here) is a spot by Gelman. It more or less speaks for itself, but it usefully makes the point again that stats without theory usually produces junk. We cannot repeat this often enough, especially given his observation that this message has not filtered through to the professionals who use the statistical machinery.

Thursday, December 28, 2017

The cog-neuro of plants

There are many (e.g. Erich Jarvis) who think that the basic hierarchical properties of language are direct reflections of its vocalic expression. This is what makes ASL and other signed languages so useful. The fact that they exist and that modulo their manual articulation they look, so far as we can tell, just like any other language (see here for discussion) puts paid (and paid in full!) to any simple minded idea that linguistic structure is “just” a reflection of their oral articulation.

Why do I mention this? Because I have just been reading some popular pieces about a potentially analogous case in neuroscience. Let me explain.

Several years ago I read a piece on plant “neurobiology” in the New Yorker penned by Michael Pollan (MP) (here). The source of the quotes around ‘neurobiology’ is a central concern of the article, in that it explores whether or not it is appropriate to allow that plants may have a cognitive life (‘bullshit’ is one of the terms tossed around) or whether this is just another case of metaphors run amok. One large influential group of critics thought the idea obscurantist bordering on the incoherent (see here). Here’s a quote from the MP piece quoting one of the critics, Lincoln Taiz:

Taiz says that the writings of plant neurobiologists suffer from “over-interpretation of the data, teleology, anthropomorphizing, philosophizing and wild speculations.” (6)

Wow, this is really bad, as you can tell when the charge is “philosophizing”!!! Can’t have any of that. At any rate, the main reason for Taiz’s conclusion, it appears, is that plants do not have brains (an uncontroversial point) or neurons (ditto). And if one further assumes that brains with neurons are required for cognition of any kind then the idea that plants might have memory, might use representations, learn and make context sensitive judgments is simply a category mistake. Hence the heat in the above quote.

In a recent Aeon essay, Laura Ruggles (LR) reprises the issues surrounding thinking plants (here). It appears that things are still contentious. This does not really surprise me. After all, the idea that plants cognize really is a weird and wonderful suggestion. So, that it could be false or, at the least, ill supported, strikes me as very plausible. However, as the quote above indicates, this is not the nature of the criticism. The objection is not that the evidence is weak but that the very idea is incoherent. It is not false. It is BS. It is not even high class BS, but a simple category mistake due to bad anthropomorphic philosophical speculation generated by teleologically addled minds. Needless to say, I got interested.

Why the heat? Because it directly challenges the reductive neurocentric conception of cognition that animates most of contemporary cog-neuro (just as ASL challenges the vocalic conception of grammar). And it does so in two ways: (i) it challenges the strong commitment to the methodology of the “neuron doctrine” in cog-neuro, and (ii) it challenges the strong commitment to the idea that biological memory and computation supervene on a connectionist architecture (i.e. that the relevant computations are inter-neural rather than intra-neural). Let me say a word or two about each point.

The “neuron doctrine” is the idea “that cognitive activity can be accounted for exclusively by basic neuroscience. Neuronal structure and function, as identified by neurophysiology, neuroanatomy and neurochemistry, furnish us with all we need to appraise the animal mind/brain complex” (see here: 208). This idea should sound familiar, because it is the same one that we discussed in the previous post (here). The position endorses a reductionist methodology for the study of the brain (more accurately, a methodological monism), one that sees no fruitful contribution from the mental sciences to cog-neuro. KGG-MMP rehearses the arguments against this rather silly view, but it clearly has staying power. Marr fought it in the 1980s, and his proposal that problems in cog-neuro need to be tackled at (at least) three different levels (computational, algorithmic/representational, implementational), which interact but are distinct and provide different kinds of explanatory power, is a response to precisely this view. This view, to repeat, was prevalent in his day (well over 30 years ago!) and is still going strong, as witnessed by the fact that KGG-MMP felt the need to reiterate his basic arguments yet again.

The plant “neuroscience” debate is a manifestation of the same methodological dogmatism, the position that takes it for granted that once we understand neurons, we will understand thought. But it is even more of a challenge to this neurocentric view. If plants can be said to cognize (have representations, memories, process information, learn), then not only is the methodological thesis inappropriate, but the idea that cognition reduces to (exclusively lives on) neuronal structure is wrong as well (again, think ASL and vocalization wrt grammar). If the plant people are onto something, then having memories is independent of having brains, as is learning and representation and who knows what else. So if the plant people are right, then not only is the neuron doctrine bad methodology, it is also ontologically inadequate.

None of this should be surprising if you have any functionalist sympathies. As Marr noted, the materials that make up a chess board/pieces can vary arbitrarily (wood, marble, bread, papier-mâché) and the game remains the same (the rules/game of chess is radically independent of the physical make-up of the board and pieces). Whether the relation of cognition to brains is more like the chess case or less so is an open question. One view (Searle comes to mind) is that no brains, no cognition. On this view, the connection between brain structure and cognition is particularly tight, in that the former is a necessary feature of the latter (thinking can only live in brains (though, IMO, there is more than a touch of mystical vitalism in Searle’s position)). If the plant cognitivists are right, then this is simply incorrect.[1]

In sum, though metaphysical reduction would not by itself lend credence to methodological reduction, if even the former is untenable, then it is quite implausible that the latter can stand. Why import neuronal methodological dicta into the study of cognition if cognitive machinery need not live in neuronal wetware?

The neuron doctrine has a more specific expression in today’s cog-neuro. It’s the claim that the brain is basically a complex neural net and that memory, learning, and cognition are products of such neural nets. In other words, the prevalent view in contemporary cog-neuro is that cognition is an inter-neuronal phenomenon, not an intra-neuronal one. Brains are the locus of cognition because brains have inter-neuronal connections. Memories, for example, are expressed in connection weights, and learning amounts to adjusting these inter-neuronal weights. The plant stuff challenges this view as well. How so? Because if plants do cognize, they seem to do it without anything analogous to a brain (more exactly, this assumption is common ground in the discussion). So, if plants have memories, then it looks like they encode these within cells. LR mentions epigenetic memory as a possible memory substrate. These involve “chromatin marks,” which “are proteins and small chemical groups that attach to DNA within cells and influence gene activity” (LR: 3). This mechanism within cells suffices to physically implement “memory.” And if this is so, then it would provide evidence for the Gallistel-King conjecture that memories can be stored biochemically within cells. Or to state this more carefully: if plants can code memories in this way, why not us too, and maybe neuronal connectionism is just a wrong-headed assumption, as Gallistel has been arguing for a while. Here is MP making this point:

How plants do without a brain…raises questions about how our brains do what they do. When I asked Mancuso about the function and location of memory in plants, he…reminded me that mystery still surrounds where and how our memories are stored: “it could be the same kind of machinery, and figuring it out in plants may help us figure it out in humans.” (MP:19)
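The connectionist picture that the Gallistel-King conjecture targets can be sketched in a few lines (my toy, not from the post): on that view a “memory” is nothing but a matrix of inter-unit connection weights, and “learning” is a Hebbian update that nudges those weights. All names and numbers here are illustrative.

```python
# A minimal sketch of the inter-neuronal (connectionist) view of memory:
# experience does not write a symbol anywhere; it merely strengthens the
# connection between units that fire together. The Gallistel-King
# conjecture is that memory instead uses an intracellular, symbolic,
# biochemical code -- precisely NOT this.

def hebbian_update(weights, pre, post, rate=0.1):
    """Strengthen w[i][j] whenever pre-unit i and post-unit j co-fire."""
    return [[w + rate * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

weights = [[0.0, 0.0], [0.0, 0.0]]   # two pre-units, two post-units
pre, post = [1.0, 0.0], [0.0, 1.0]   # one co-activation episode

weights = hebbian_update(weights, pre, post)
assert weights[0][1] > 0.0   # the co-active pair got a stronger link
assert weights[1][0] == 0.0  # inactive pairs are untouched
```

Notice that on this scheme the “memory” is smeared across the weight matrix; there is no addressable register holding a retrievable value, which is exactly Gallistel’s complaint about it as a model of memory.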


Ok, there is lots of fun stuff in these essays. It is fun to see how plant people go about arguing for mental capacities in plants. There are nice discussions of experiments that appear to show that plants can “habituate” to stimuli (they pretend-drop plants and see how they react), can “learn” new associations among stimuli (using wind as a conditioned stimulus for light), and can anticipate what will happen (where the sun will be tomorrow) in the absence of the thing being anticipated (in the absence of input from the sun), which suggests that plants can represent the trajectory of the sun. Is this “true” and do plants cognize? I have no idea. But an a priori denial that it is possible is based on conceptions of what proper cog-neuro is that we have every reason to reject.

[1] So too if machines can cognize (Searle’s target), something that seems less challenging for some reason than that plants do. There is some nice speculation in the MP article as to why this might be the case.

Wednesday, December 20, 2017

More on modern university life

Universities are spending more and more money on administrative staff. Here is a post with references to more in-depth material that puts some numbers to this process. Administration is eating up all the revenue and it is growing faster than any other part of the university. Three points of interest in the post: first, faculty positions have risen in line with student numbers (56% rise in students and 51% rise in faculty). The out-of-proportion rise lies with administrators and their staffs. It has exploded. Second, this trend is bigger in private universities than public ones. The post notes that this "looks to be the opposite of what we would expect if it were public mandates lying behind this [i.e. rise in bureaucrats, NH] trend." Third, this really is a new trend. Universities are changing. As the post notes, in the "good old days" top admins tended to be more senior faculty with reasonably distinguished records who had been on campus for a long time and knew the people and the place. Now we have undistinguished professional managers...
I don't know about where you are, but this seems to pretty well sum up the state of play at those institutions that I am acquainted with (like my own).

Monday, December 11, 2017

How to study brains and minds

There is currently a fight going on in cog-neuro whose outcome GGers should care about. It is illuminatingly discussed in a recent paper by Krakauer, Ghazanfar, Gomez-Marin, MacIver and Poeppel (KGG-MMP) (here). The fight is about how to investigate the mind/brain connection. There are two positions. One, which I will call the “Wrong View” (WV) just to have a useful mnemonic, takes a thoroughly reductionist approach to the problem. The idea is that a full understanding of brain function will follow from a detailed understanding of “their component parts and molecular machinery” (480). The contrary view, which I dub the “Right View” (RV) (again, just to have a name),[1] thinks that reductionism will not get nearly as far as we need to go and that getting a full understanding of how brains contribute to thinking/feeling/etc. requires studying neural implementation in tandem with (and more likely subsequent to) “careful theoretical and experimental decomposition of behavior.” More specifically, “the detailed analysis of tasks and of the behavior they elicit is best suited for discovering component processes and their underlying algorithms. In most cases,…the study of the neural implementation of behavior is best investigated after such behavioral work” (480). In other words, WV and RV differ not over the end game (an understanding of how the brain subvenes the mechanisms relevant to behavior) but over the best route to that end. WV thinks that if you take care of the neuronal pennies, the cognitive dollars will take care of themselves. The RV thinks that doing so will inevitably miss the cognitive forest for the neural trees and might in fact even obscure the function of the neural trees in the cognitive forest. (God I love to mix metaphors!!). Of course, RV is right and WV is wrong. I would like to review some of the points KGG-MMP makes arguing this. However, take a look for yourself. The paper is very accessible and worth thinking about more carefully.

Here are some points that I found illuminating (along with some points of picky disagreement (or, how I would have put things differently)).

First, framing the issue as one of “reductionism” confuses matters. The issue is less reduction than it is a neurocentric myopia. The problem KGG-MMP identifies revolves around the narrow methods standard practice deploys, not the ultimate metaphysics that it endorses. In other words, even if there is, ontologically speaking, nothing more than “neurons” and their interactions,[2] discovering what these interactions are and how they combine to yield the observed mental life will require well developed theories of this mental life expressed in mentalistic non-neural terms. The problem then with standard practice is not its reduction but its methodological myopia. And KGG-MMP recognizes this. The paper ends with an appeal for a more “pluralistic” neuroscience, not an anti-reductionist one.

Second, KGG-MMP gives a nice sketch of how WV has become so prevalent. It provides a couple of reasons. The first has been the tremendous success of “technique driven neuroscience” (481). There can be no doubt that there has been an impressive improvement in the technology available to study the brain at the neuronal level. New and better machines, new and better computing systems, new and better maps of where things are happening. Put these all together and it is almost irresistible to grab for the low hanging fruit that such techniques bring into focus. Nor, indeed, should this urge be resisted. What needs resisting is the conclusion that because these sorts of data can be productively gathered and analyzed, these data suffice to answer the fundamental questions.

KGG-MMP traces the problem to a dictum of Monod’s: “what is true of the bacterium is true of the elephant.” KGG-MMP claims that this has been understood within cog-neuro as claiming that “what is true for the circuit is true for the behavior” and thus that “molecular biology and its techniques should serve as the model of understanding in neuroscience” (481).

This really is a pretty poor form of argument. It effectively denies the possibility of emergence. Here’s Martin Rees (here) making the obvious point:

Macroscopic systems that contain huge numbers of particles manifest ‘emergent’ properties that are best understood in terms of new, irreducible concepts appropriate to the level of the system. Valency, gastrulation (when cells begin to differentiate in embryonic development), imprinting, and natural selection are all examples. Even a phenomenon as unmysterious as the flow of water in pipes or rivers is better understood in terms of viscosity and turbulence, rather than atom-by-atom interactions. Specialists in fluid mechanics don’t care that water is made up of H2O molecules; they can understand how waves break and what makes a stream turn choppy only because they envisage liquid as a continuum.

Single molecules of H2O do not flow. If one is interested in fluid mechanics then understanding will come only by going beyond the level of the single molecule or atom. Similarly, if one is interested in the brain mechanisms underlying cognition or behavior, then it is very likely that we will need to know a lot about how groups of fundamental neural elements interact, not just how one does what it does. So just as a single bird doesn’t flock, nor a single water molecule flow, nor a single gastric cell digest, so neither does a single brain particle (e.g. neuron) think. We will need more.
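The flocking example can be made concrete with a toy sketch (my illustration, not from the paper): each “bird” follows a single local rule — nudge your heading toward the group average — and yet a group-level order parameter, alignment, rises over time. Alignment is a property of the flock, not of any single bird. All the numbers and names here are illustrative.

```python
# Toy emergence demo: local interactions produce a global property
# ('alignment') that no individual agent possesses on its own.
import math

def step(headings, rate=0.5):
    """Each agent turns partway toward the mean heading of the group."""
    mean_x = sum(math.cos(h) for h in headings) / len(headings)
    mean_y = sum(math.sin(h) for h in headings) / len(headings)
    target = math.atan2(mean_y, mean_x)
    return [h + rate * (target - h) for h in headings]

def alignment(headings):
    """Group order parameter: 1.0 = perfectly aligned, ~0 = disordered."""
    x = sum(math.cos(h) for h in headings) / len(headings)
    y = sum(math.sin(h) for h in headings) / len(headings)
    return math.hypot(x, y)

headings = [0.0, 1.0, 2.0, 3.0]   # four agents, scattered headings
before = alignment(headings)
for _ in range(20):
    headings = step(headings)
after = alignment(headings)
assert after > before  # order emerges from purely local interactions
```

The point of the sketch is that to explain the rising alignment you must describe the system at the level of the flock; staring at a single agent’s update rule, however carefully, never mentions alignment at all.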

Before I get to what more, I should add here that I don’t actually think that Monod meant what KGG-MMP take him to have meant. What Monod meant was that the principles of biology that one finds in the bacterium are the same as those that we find in the elephant. There is little reason to suppose, he suggested, that what makes elephants different from bacteria lies in their smallest parts respecting different physical laws. It’s not as if we expect the biochemistry to change. What KGG-MMP and Rees observe is that this does not mean that all is explained by just understanding how the fundamental parts work. This is correct, even if Monod’s claim is also correct.

Let me put this another way: what we want are explanations. And explanations of macro phenomena (e.g. flight, cognition) seldom reduce to properties of the basic parts. We can completely understand how these work without having the slightest insight into why the macro system has the features it does. Here is Rees again on reduction in physics:

So reductionism is true in a sense [roughly Monod’s sense, NH]. But it’s seldom true in a useful sense. Only about 1 per cent of scientists are particle physicists or cosmologists. The other 99 per cent work on ‘higher’ levels of the hierarchy. They’re held up by the complexity of their subject, not by any deficiencies in our understanding of subnuclear physics.

So, even given the utility of understanding the brain at the molecular level (and nobody denies that this is useful), we need more than WV allows for. We need a way of mapping two different levels of description onto one another. In other words, we need to solve what Embick and Poeppel have called the “granularity mismatch problem” (see here). And for this we need to find a way of matching up behavioral descriptions with neural ones. And this requires “fine grained” behavioral theories that limn mental mechanisms (“component parts and subroutines”) as finely as neural accounts describe brain mechanisms. Sadly, as KGG-MMP notes, behavioral investigation “has increasingly been marginalized or at best postponed” (481-2), and this has made moving beyond the WV difficult. Rectifying this requires treating behavior “as a foundational phenomenon in its own right” (482).[3]

Here is one more quibble before going forward. I am not really fond of the term ‘behavioral.’ What we want is a way of matching up cognitive mechanisms with neural ones. We are not really interested in explaining actual behavior but in explaining the causal springs and mechanisms that produce behavior. Focusing on behavior leads to competence/performance confusions that are always best avoided. That said, the term seems embedded in the cog-neuro literature (no doubt a legacy of psychology’s earlier disreputable behaviorist past) and cannot be easily dislodged. What KGG-MMP intends is that we should look for mental models and use these to explore neural models that realize these mental systems. Of course, we assume that mental systems yield behaviors in specific circumstances, but like all good scientific theories, the goal is to expose the mental causes behind the specific behavior, and it is these mental causal factors whose brain realization we are interested in understanding. The examples KGG-MMP gives show that this is the intended point.

Third, KGG-MMP nicely isolates why neuroscience needs mental models. Or as KGG-MMP puts it: “Why is it the case that explanations of experiments at the neural level are dependent on higher level vocabulary and concepts?” Because “this dependency is intrinsic to the very concept of a “mechanism”.” The crucial observation is that “the components of a mechanism do different things than the mechanism organized as a whole” (485). As Marr noted, feathers are part of the bird flight mechanism, but feathers don’t fly. To understand how birds fly requires more than a careful description of their feathers. So too with neurons.

Put another way, since mental life (and so behavior) is an emergent property of neurons, how neurons subvene mental processes will not be readily apparent from studying neural properties alone, whether singularly or collectively.

Fourth, KGG-MMP gives several nice concrete examples of fruitful interactions between mental and neural accounts. I will not review them here, save to say that sound localization in barn owls makes its usual grand appearance. However, KGG-MMP provides several other examples as well, and it is always useful to have a bunch of these on hand.

Last, KGG-MMP got me thinking about how GGish work intersects with the neuro concerns the paper raises, in particular minimalism and its potential impact on neuroscience. I have suggested elsewhere (e.g. here) that MP finally offers a way of bridging the granularity gap that Embick and Poeppel identified. The problem, as they saw it, was that the primitives GGers were comfortable with (binding, movement, c-command) did not map well to primitives neuro types were comfortable with. If, as KGG-MMP suggests, we take the notion of the “circuit” as the key bridging notion, the problem with GG was that it did not identify anything simple enough to be a plausible correlate to a neural circuit. Another way of saying this is that theories like GB (though very useful) did not “dissect [linguistic, NH] behavior into its component parts or subroutines” (481). It did not carve linguistic capacity at its joints. What minimalism offers is a way of breaking GB parts down into simpler subcomponents. Reducing macro GB properties to products of simple operations like Merge or Agree or Check Feature promises to provide mental parts simple enough to be neurally interpretable. As KGG-MMP makes clear, finding the right behavioral/mental models matters, and breaking complex mental phenomena down into their simpler parts will be part of finding the most useful models for neural realization.
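Merge, the minimalist primitive alluded to above, is simple enough to write down in a few lines. Here is a toy sketch (mine, not the paper’s): Merge(a, b) forms the unordered set {a, b}, and iterated application yields binary-branching hierarchical structure, which is why macro properties of phrases can be treated as products of one primitive operation. The example sentence and labels are illustrative.

```python
# A minimal sketch of Merge as minimalism's single structure-building
# operation: Merge(a, b) = {a, b}. Hierarchy is just set containment.

def merge(a, b):
    """Merge two syntactic objects into an unordered set."""
    return frozenset([a, b])

# Build 'the dog barked' bottom-up:
dp = merge("the", "dog")     # {the, dog}
tp = merge(dp, "barked")     # {{the, dog}, barked}

assert dp in tp and "barked" in tp         # dp is hierarchically contained in tp
assert merge("a", "b") == merge("b", "a")  # Merge imposes no linear order
```

A primitive of this granularity is at least the right *shape* to be paired with something as simple as a neural circuit, which is the bridging move the paragraph above is after.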

Ok, that’s it. The paper is accessible and readable and useful. Take a look.

[1] As we all know, the meaning of the name is just what it denotes so there is no semantic contribution that ‘wrong’ and ‘right’ make to WV and RV above.
[2] The quotes are to signal the possibility that Gallistel is right that much neuronal/cognitive computation takes place sub-neuronally.
[3] Again, IMO, though I agree with the thrust of this position, it is very badly put. It is not behavior that is foundational but mentalistic accounts of behavior, the mechanisms that underlie it, that should be treated as foundational. In all cases, what we are interested in are the basic mechanisms not their products. The latter are interesting exactly to the degree that they illuminate the basic etiology.