In between eating too much and watching junk films, I received and found a couple of things that you might find interesting to look at. Here they are:
(1) To lead off, here is a paper using the famous head-turning technique to show that dogs might have some sense of words. Continuing the long line of chinchilla research showing that animals other than humans have apparent phonetic competence, this paper argues that dogs can hear words. So far, however, no sign of any syntax, but I'll post anything that I see indicating otherwise. Personally, I always thought that my pet Porty Sampson understood most everything that was worth understanding.
Next come two pieces on neuroscience. The first is a methodological piece on fMRI to warm you up:
(2) Here. It manages to get to the heart of the method of localization very quickly.
The second piece (here), sent to me by Colin (thx), is by Gary Marcus and friends on the dearth of theory in neuroscience and how sorely this absence is missed. Their general point is one that I cannot agree with more completely. As they put it:
Experimentalists need to work closely with data analysts and theorists to understand what can and should be asked, and how to ask it. A second lesson is that delineating the biological basis of behavior will require a rich understanding of behavior itself. A third is that understanding the nervous system cannot be achieved by a mere catalog of correlations. Big data alone aren’t enough.
Yes!! Note, btw, the second list item: you need to understand the "behavior." I would not have used this term myself. 'Competence' or 'capacity' would have been more apt. But the gist is right. As regards the neuroscience of language, knowledge of what the linguistic system does and the principles it uses will be important in isolating how the brain manages to do things. Trouble is that finding funding for this is not easy. Theory construction is pretty high risk. You often fail. And funding agencies don't like failure. They bet on sure outcomes. So how and whether the call Marcus et al make will ever be heeded is, IMO, very much up for grabs.
(3) Here is a last piece I enjoyed. It develops the theme that Marcus et al outline, but from a much more rarefied perspective: whether there can be, or should be, theoretical unification across the sciences. This relates to the Marcus et al piece, for one of the standard assumptions made by neuroscientists is that there is no real need to understand the mental side of things, as these will anyhow all be reduced to wetware. This reductionist view has been pernicious, as it has supported the idea that while brains are real and deserving of study, minds can be bypassed in our understanding of how brains work. This belief is fueled by a reductionist mind (brain?) set that gets some nice discussion here.
One caveat: Chomsky has recently suggested that we drop the idea of reduction and replace it with some conception of unification (I confess to being a really BIG FAN of unification). Reduction suggests that the reducing science enjoys some kind of epistemological privilege, while the reduced one is more sketchy. As Chomsky has noted, more often than not it is the reducing science that has had to fundamentally change to accommodate the "reduction." The piece linked to above notes that reductions in any real sense are very rare (perhaps nonexistent). Indeed, given their paucity, unification is probably a better term for what has been called reduction. In addition, 'unification' carries none of the epistemological suggestions that 'reduction' comes with. A good thing, IMO.
Last point: many of the points made by Pigliucci in the above post might also be fruitfully made about formalization. Here's what I mean. Findings in the special sciences are often treated invidiously until "reduced" to a more "basic" (i.e. prestige) science. However, more often than not, it is the results of the special science that endure the reduction, while the reducing science changes radically (think chemistry and physics, or biochemistry and genetics). In other words, the success of the reduction is measured (in large part) by how well the reduction conserves the results of the domain being reduced.
I think that the same is true of formalizations. Good formalizations preserve the insights of the domain being formalized. To be more concrete: the 19th century formalizations of the calculus preserved most of the results obtained using the more informal, intuitive methods of the prior 250 years of research (so too with modern formalizations of geometry). In fact, had they NOT done so, the formalizations by Cauchy and Weierstrass would have been deemed off the mark. I would go further: in a domain where there is a settled body of doctrine (one that is empirically and theoretically non-trivial, i.e. that withstands critical scrutiny), both reduction and formalization must meet a common standard: preserve most of the doctrine of the reduced/formalized domain. As the rabbis used to say: "those who understand will understand."