Sunday, March 31, 2019

Seeing with your tongue

EM: You refuse to look through my telescope.
TK (gravely): It's not a telescope, Errol. It's a kaleidoscope.
Exchange recounted in The Ashtray by Errol Morris (p12f)

Alex and Tobias in their post:

"At a more general cognitive level, we know positively that the human brain/mind is perfectly able to make sense of sensory input that was never encountered and for sure is not innate. Making sense here means "transform a sensory input into cognitive categories". There are multiple examples of how electric impulses have been learned to be interpreted as either auditive or visual perception: cochlear implants on the one hand, so-called artificial vision, or bionic eye on the other hand. The same goes for production: mind-controlled prostheses are real."

The nervous system can certainly "transform a sensory input into cognitive categories"; the question is how wild these transformations (transductions, interfaces) can be. No surprise, I'm going to say that they are highly constrained and therefore not fully arbitrary, basically limited to quasimorphisms. In the case of (visual) geometry, I think that we can go further and say that they are constrained to affine transformations and radial basis functions.



One of the better known examples of a neuromorphic sensory prosthetic device is the Brainport, an electrode array which sits on the surface of the tongue ( https://www.youtube.com/watch?v=48evjcN73rw ). The Brainport is a 2D sensor array, and so there is a highly constrained geometric relationship between the tongue-otopic coordinate system of the Brainport and the retinotopic one. As noted in the Wikipedia article, "This and all types of sensory substitution are only possible due to neuroplasticity." But neuroplasticity is not total, as shown by the homotopically limited range of language reorganization (Tivarus et al 2012).

So the thought experiment here consists of thinking about stranger tongue-otopic arrangements and whether they would work in a future Brainport V200 device.

1. Make a "funhouse" version of the Brainport. Flip the vertical and/or horizontal dimensions. This would be like wearing prismatic glasses. Reflections are affine transformations. This will work.

2. Make a color version of the Brainport. Provide three separate sensor arrays, one for each of the red, green and blue wavelengths. In the retina the different cone types for each "pixel" are intermixed (spatially proximate); in the color Brainport they wouldn't be. We would be effectively trying to use an analog of stereo vision computation (but with 3 "eyes") to do color registration and combination. It's not clear whether this would work.

3. Make a "kaleidoscope" version of the Brainport. Randomly connect the camera pixel array with the sensor array, such that adjacent pixels are no longer guaranteed to be adjacent on the sensor array. The only way to recover the adjacency information is via a (learned) lookup table. This is beyond the scope of affine transformations and radial basis functions. This will not work.
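
To make the geometric point concrete, here is a minimal sketch in Python (the 8x8 grid size and the random wiring are made up for illustration) contrasting case 1 with case 3: it measures how much of the camera's adjacency structure survives each remapping. A reflection preserves all of it; kaleidoscope wiring leaves only chance-level adjacency, recoverable only by memorizing the lookup table.

```python
import random

# Toy 8x8 "tongue array" (the size is illustrative, not the real device's).
N = 8
coords = [(r, c) for r in range(N) for c in range(N)]

def flip(rc):
    """Case 1: 'funhouse' horizontal reflection, an affine map."""
    r, c = rc
    return (r, N - 1 - c)

# Case 3: 'kaleidoscope' rewiring, a pure lookup table.
shuffled = coords[:]
random.Random(0).shuffle(shuffled)
lookup = dict(zip(coords, shuffled))

def scramble(rc):
    return lookup[rc]

def adjacency_preserved(f):
    """Fraction of camera-adjacent pixel pairs still adjacent after remapping."""
    kept = total = 0
    for r, c in coords:
        for dr, dc in ((0, 1), (1, 0)):
            if r + dr < N and c + dc < N:
                total += 1
                (a, b), (x, y) = f((r, c)), f((r + dr, c + dc))
                kept += (abs(a - x) + abs(b - y) == 1)
    return kept / total

print(adjacency_preserved(flip))      # 1.0: reflections preserve adjacency
print(adjacency_preserved(scramble))  # near 0: random wiring destroys it
```

On this measure case 2 sits in between: each of the three color arrays preserves adjacency internally, but registration across the arrays has to be computed, which is why its fate is less clear.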

References

Liu S-C, Delbruck T 2010. Neuromorphic sensory systems. Current Opinion in Neurobiology, 20(3), 288–295.

Tivarus ME, Starling SJ, Newport EL, Langfitt JT 2012. Homotopic language reorganization in the right hemisphere after early left hemisphere injury. Brain and Language, 123(1), 1–10.

Friday, March 29, 2019

More on "arbitrary"

Bill Idsardi

Alex and Tobias have upped the ante, raised the stakes and doubled down in the substance debate, advocating a "radical substance-free" position in their post.

I had been pondering another post on this topic myself since reading Omer's comment on his blog that "a parallel, bi-directional architecture is literally the weakest possible architectural assumption". So I guess Alex and Tobias are calling my bluff, and I need to show my cards (again).

So I agree that "substance abuse" is bad, and I agree that minimization of substantive relationships is a good research tactic, but "substance-free" is at best a misnomer, like a "100% chemical free hair dye" which shoppers assume isn't just an empty box. A theory lacking any substantive connection with the outside world would be a theory about nothing.


And there's more to the question of "substance" than just entities: there are also predicates and relations over those entities. If phonology is a mental model for speech then it must have a structure and an interpretation, and the degree of veridicality in the interpretation of the entities, predicates and relations is the degree to which the model is substantive. Some truths about the entities, predicates and relations in the outside world will be reflected in the model; that's its substance. The computation inside the model may be encapsulated, disconnected from events in the world, without an interesting feedback loop (allowing, say, for simulations and predictions about the world), but that's a separate concept.

As in the case discussed by Omer, a lot of the debate about "substance" seems to rest on architectural and interface assumptions (with the phonology-phonetics-motor-sensory interfaces often termed "transducers" with nods to sensory transducers, see Fain 2003 for an introduction). The position taken by substance-free advocates is that the mappings achieved by these interfaces/transducers (even stronger, all interfaces) are arbitrary, with the canonical example being a look-up table, as exhibited by the lexicon. For example, from Scheer 2018:

“Since lexical properties by definition do not follow from anything (at least synchronically speaking), the relationship between the input and the output of this spell-out is arbitrary: there is no reason why, say, -ed, rather than -s, -et or -a realizes past tense in English.
    The arbitrariness of the categories that are related by the translational process is thus a necessary property of this process: it follows from the fact that vocabulary items on either side cannot be parsed or understood on the other side. By definition, the natural locus of arbitrariness is the lexicon: therefore spell-out goes through a lexical access.
    If grammar is modular in kind then all intermodular relationships must instantiate the same architectural properties. That is, what is true and undisputed for the upper interface of phonology (with morpho-syntax) must also characterize its lower interface (with phonetics): there must be a spell-out operation whose input (phonological categories) entertain an arbitrary relationship with its output (phonetic categories).” [italics in original, boldface added here]

Channeling Omer then, spell-out via lookup table is literally the weakest possible architectural assumption about transduction. A lookup table is the position of last resort, not the canonical example. Here's Gallistel and King (2009: xi) on this point:

“By contrast, a compact procedure is a composition of functions that is guaranteed to generate (rather than retrieve, as in table look-up) the symbol for the value of an n-argument function, for any arguments in the domain of the function. The distinction between a look-up table and a compact generative procedure is critical for students of the functional architecture of the brain.”

I think it may confuse some readers that Gallistel and King talk quite a bit about lookup tables, but they do say "many functions can be implemented with simple machines that are incomparably more efficient than machines with the architecture of a lookup table" (p. 53).
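
For readers who want the distinction in one screenful, here is a toy contrast in Python (my example, not Gallistel and King's): addition implemented as a lookup table versus as a compact procedure.

```python
# Addition over a toy domain, done two ways.

N = 100
# Lookup table: one stored entry per argument pair, 10,000 symbols for a
# 100x100 domain, and nothing at all outside it.
table = {(a, b): a + b for a in range(N) for b in range(N)}

def compact_add(a, b):
    # Compact procedure: constant-size machinery that generates (rather
    # than retrieves) the value for any arguments in the domain.
    return a + b

print(table[(3, 4)], compact_add(3, 4))   # 7 7
print(len(table))                         # 10000 stored entries
print(compact_add(1234, 5678))            # 6912: no table entry needed
```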

Jackendoff 1997:107f (who advocates a parallel, bi-directional architecture of the language faculty by the way) struggles to find analogs to the lexicon:

"One of the hallmarks of language, of course, is the celebrated "arbitrariness of the sign," the fact that a random sequence of phonemes can refer to almost anything. This implies, of course, that there could not be language without a lexicon, a list of the arbitrary matches between sound and meaning (with syntactic properties thrown in for good measure).
   If we look at the rest of the brain, we do not immediately find anything with these same general properties. Thus the lexicon seems like a major evolutionary innovation, coming as if out of nowhere."

Jackendoff then goes on to offer some possible examples of lexicon-like associations: vision with taste ("mashed potatoes and French vanilla ice cream don't look that different") and skilled motor movements like playing a violin or speaking ("again it's not arbitrary, but processing is speeded up by having preassembled units as shortcuts.") But his conclusion ("a collection of stored associations among fragments of disparate representations") is that overall "it is not an arbitrary mapping".

As I have said before, in my opinion a mapping has substance to the extent that it has partial veridicality. (Max in the comments to the original post prefers "motivated" to what I called "non-arbitrary", but see Burling, who draws a direct opposition between "motivated" and "arbitrary".)

So I have two points to re-emphasize about partial veridicality: it's partial and it displays some veridicality.

Partially, not completely, veridical

This is the easy part, and the linguists all get this one. (But it was a continuing source of difficulty for some neuro people in my grad cogsci course over the years.) The sensory systems of animals are limited in dynamic range and in many other ways. The whole concept of a “just noticeable difference” means that there are physical differences that are below the threshold of sensory detection.  The fact that red is next to violet on the color wheel is also an example of a non-veridical aspect of color perception.

These are relatively easy because they are a bit like existence proofs. We just need to find some aspect of the system that breaks a relationship at a single point across the interface. Using T to represent transduction, we need to find a relation R such that R(x,y) holds but TR(Tx,Ty) does not hold everywhere or vice versa. In the color wheel example the "external" relation is wavelength distance, and the "internal" relation is perceptual hue similarity; violet is perceptually similar to red even though the wavelength of violet is maximally distant from red in the visible spectrum. (But otherwise wavelength distance is a good predictor of perceptual similarity.) And this same argument extends to intermodular relationships within the visual system, as in the mapping between the RGB hue representation in the retina and the R/G-Y/B opponent process representation in the lateral geniculate nucleus.
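
The color wheel case can be put in toy numerical form. In the sketch below T is a crude, made-up mapping from wavelength onto a hue circle (illustrative only, not a model of human color vision); R is wavelength distance, TR is angular hue distance, and the red/violet pair is where R(x,y) and TR(Tx,Ty) come apart.

```python
def T(wavelength_nm):
    # Map the visible line [400, 700] nm onto ~270 degrees of a hue circle,
    # so violet (400 nm) wraps back around toward red (700 nm).
    return 270.0 * (700 - wavelength_nm) / 300.0

def R(w1, w2):
    """External relation: distance in wavelength (nm)."""
    return abs(w1 - w2)

def TR(h1, h2):
    """Internal relation: angular distance on the hue circle (degrees)."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

red, blue, violet = 700, 470, 400
print(R(red, violet), TR(T(red), T(violet)))  # 300 vs 90.0: relation broken
print(R(red, blue), TR(T(red), T(blue)))      # 230 vs 153.0: order reversed
```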

Partially, not completely, arbitrary

I am never forget the day
I am given first original paper to write
It was on analytical algebraic topology
Of locally Euclidean metrization
Of infinitely differentiable Riemannian manifold
Боже мой! [My God!]
This I know from nothing
Tom Lehrer, "Lobachevsky"

This is somewhat harder to think about because one has to imagine really crazy functions (i.e. arbitrary functions in the mathematical sense, full lookup table functions). To put my cards on the table, I don't believe sensory transducers are capable of computing arbitrary functions (the place to look for this would be the olfactory system). I think they are limited to quasimorphisms, capable of making some changes in topology (e.g. line to circle in color vision) but the functions are almost everywhere differentiable, offering a connection with manifold learning (Jansen & Niyogi 2006, 2013). I think Gallistel and King (2009: x) have pretty much the same view (though I think "homomorphism" is slightly too strong):

“Representations are functioning homomorphisms. They require structure-preserving mappings (homomorphisms) from states of the world (the represented system) to symbols in the brain (the representing system). These mappings preserve aspects of the formal structure of the world.” 

So here's another bumper sticker slogan: preserved structure is substance. 

It's homomorphic, not isomorphic, so the structure is not completely preserved (it's only partially veridical). But it doesn't throw out all the structure, which includes not just entities but also relationships among entities.

A small example of this sort can be found in Heffner et al 2019. Participants were asked to learn new categories, mappings between sounds and colors, with the sounds drawn from a fricative continuum between [x] and [ç] (1-10), and the associated colors drawn from the various conditions shown in the figure.

[Figure from Heffner et al 2019 (color-category conditions) not reproduced here.]
I don't think it should come as much of a surprise that "picket fence" and "odd one out" are pretty hard for people to learn. So the point here is that there is structure in the learning mechanism; mappings with fewer discontinuities are preferred.
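
The structural difference between the conditions can be summarized by counting discontinuities along the continuum. The mappings below are illustrative stand-ins, not the exact stimulus-to-color assignments used by Heffner et al 2019:

```python
# Category labels along the 10-step fricative continuum, and the number of
# category boundaries (discontinuities) each arrangement contains.
conditions = {
    "single boundary": "AAAAABBBBB",
    "odd one out":     "AAAAABAAAA",
    "picket fence":    "ABABABABAB",
}

for name, mapping in conditions.items():
    boundaries = sum(a != b for a, b in zip(mapping, mapping[1:]))
    print(f"{name:16s} {mapping}  discontinuities = {boundaries}")
```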

Here's a similar finding from gerbils (Ohl 2001, 2009):

[Figure from Ohl et al (auditory category learning results) not reproduced here.]
Ohl et al 2009: "Animals trained on one or more training blocks never generalized to pure tones of any frequency (e.g. start or stop frequencies of the modulated tone, or frequencies traversed by the modulation or extrapolated from the modulation). This could be demonstrated by direct transfer experiments (Ohl et al 2001, supplementary material) or by measuring generalization gradients for modulation rate which never encompassed zero modulation rates (Ohl et al 2001)." [pure tones have a zero modulation rate -- WJI]

That is, the gerbils don't choose a picket fence interpretation either, although that would work here, based on the starting frequency of the tone. Instead, they find the function with the fewest discontinuities that characterizes the data, based on their genetic endowment of spectro-temporal receptive fields (STRFs) in their primary auditory cortex. They don't get to invent new STRFs, let alone create arbitrary ones. The genetic endowment provides the structure for the sensory transductions, and thus some functions are learnable while many are not. So the resulting functions are partially, but not completely arbitrary. And they have a limited number of discontinuities.

By the way, exemplar (instance-based) learning models have no trouble with picket fence arrangements, learning them as quickly as they learn the other types.
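
That indifference is easy to see in a sketch: a bare 1-nearest-neighbor exemplar model (a caricature, not any published model) just memorizes the pairs, so discontinuities cost it nothing.

```python
def train(pairs):
    """'Learning' in an exemplar model is just storing the exemplars."""
    return list(pairs)

def classify(memory, x):
    """Label a stimulus with the label of its nearest stored exemplar."""
    return min(memory, key=lambda ex: abs(ex[0] - x))[1]

# A picket fence mapping over the continuum 1-10: B A B A B A B A B A.
picket_fence = [(i, "AB"[i % 2]) for i in range(1, 11)]
memory = train(picket_fence)
print([classify(memory, x) for x in range(1, 11)])  # perfect recall
```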

OK, I think that's enough for now. I'll address my take on the relative priority of features and segments in another post.

References

Fain GL 2003. Sensory Transduction. Sinauer.

Gallistel CR, King AP 2009. Memory and the Computational Brain. Wiley-Blackwell.

Heffner CC, Idsardi WJ, Newman RS 2019. Constraints on learning disjunctive, unidimensional auditory and phonetic categories. Attention, Perception & Psychophysics. https://doi.org/10.3758/s13414-019-01683-x

Jackendoff R 1997. The Architecture of the Language Faculty. MIT Press.

Jansen A, Niyogi P 2006. Intrinsic Fourier analysis on the manifold of speech sounds. IEEE ICASSP. Retrieved from https://ieeexplore.ieee.org/abstract/document/1660002/

Jansen A, Niyogi P 2013. Intrinsic Spectral Analysis. IEEE Transactions on Signal Processing, 61(7), 1698–1710.

Ohl FW, Scheich H, Freeman WJ 2001. Change in pattern of ongoing cortical activity with auditory category learning. Nature, 412(6848), 733–736.

Ohl FW, Scheich H 2009. The role of neuronal populations in auditory cortex for category learning. In Holscher C, Munk M (Eds.) Information Processing by Neuronal Populations. Cambridge University Press. 224-246.

Scheer T 2018. The workings of phonology and its interfaces in a modular perspective. In Annual conference of the Phonological Society of Japan. phsj.jp. Retrieved from http://phsj.jp/PDF/abstract_Scheer_forum2018.pdf

Wednesday, March 27, 2019

That 32GB flashdrive? It's 20,000 people's worth of language acquisition information

Today in the Royal Society Open Science:

Mollica F, Piantadosi ST. 2019. Humans store about 1.5 megabytes of information during language acquisition. R. Soc. open sci. 6: 181393. http://dx.doi.org/10.1098/rsos.181393
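
For the arithmetic behind the title: at about 1.5 MB per learner, a 32 GB drive holds roughly 32,000 MB ÷ 1.5 MB ≈ 21,000 people's worth of language acquisition information.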

Tuesday, March 26, 2019

A new player has entered the game

Here is a guest post from Alex Chabot and Tobias Scheer picking up a thread from about a year ago now. Bill



Alex Chabot & Tobias Scheer

What it is that is substance-free: computation and/or melodic primes

A late contribution to the debate...
In his post from April 12th, 2018, Veno clarified his take on the status of melodic primes (features) in phonology (which is identical to the one set out in the work of Hale & Reiss since 2000). The issue that gave rise to some misunderstanding, and probably some misconception, about the kind of primes that Hale-Reiss-Volenec propose concerns their substance-free status: which aspect of them is actually substance-free and which is not? This is relevant because the entire approach initiated by Hale & Reiss' 2000 LI paper has come to be known as substance-free.

Veno has thus made explicit that phonological features in his view are substance-laden, but that this substance does not bear on phonological computation. That is, phonological features bear phonetic labels ([labial], [coronal] etc.) in the phonology, but phonological computation ignores them and is able to turn any feature set into any other feature set in any context and its reverse. This is what may be called substance-free computation (i.e. computation that does not care for phonetics). At the same time, Veno explains, the phonetic information carried by the features in the phonology is used upon externalization (if we may borrow this word for phonological objects): it defines how features are pronounced (something called transduction by Hale-Reiss-Volenec, or phonetic implementation system PIS in Veno's post). That is, phonological [labial] makes sure that it comes out as something phonetically labial (rather than, say, dorsal). The correspondence between the phonological object and its phonetic exponent is thus firmly defined in the phonology, not by the PIS device.

The reason why Hale & Reiss (2003, 2008: 28ff) have always held that phonological features are substance-laden is learnability: they contend that cognitive categories cannot be established if the cognitive system does not know beforehand what kind of sensory input will come its way and relate to which category ("let's play cards"). Hence labiality, coronality etc. would be unparsable noise for the L1 learner did they not know at birth what labiality, coronality etc. are. Therefore, Hale-Reiss-Volenec conclude, substance-laden phonological features are universal and innate.

We believe that this take on melodic primes is misguided (we talk about melodic primes since features are the regular currency, but there are also approaches that entertain bigger, holistic primes, called Elements; everything that is said about features also applies to Elements). The alternative to which we subscribe is called radical substance-free phonology, where "radical" marks the difference with Hale-Reiss-Volenec: in this view both phonological computation and phonological primes are substance-free. That is, phonology is really self-contained in the Saussurean sense: no phonetic information is present (as opposed to: present but ignored). Melodic primes are thus alphas, betas and gammas: they ensure contrast and the infra-segmental decomposition that is independently necessary. They are related to phonetic values by the exact same spell-out procedure that is known from the syntax-phonology interface: vocabulary X is translated into vocabulary Y through a lexical access (Scheer 2014). Hence α ↔ labiality (instead of [labial] ↔ labiality).
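
A minimal sketch of spell-out as lexical access (in Python, with placeholder primes and values; the point is that phonetic content lives only in the table, never in the computation):

```python
# Substance-free primes on one side, phonetic categories on the other;
# the relation between them is a lexical lookup, hence arbitrary.
spell_out = {
    "α": "labiality",
    "β": "coronality",
    "γ": "dorsality",
}

def externalize(primes):
    """Translate vocabulary X into vocabulary Y through a lexical access."""
    return [spell_out[p] for p in primes]

print(externalize(["α", "β"]))   # ['labiality', 'coronality']
```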

1. Formulations

To start, the misunderstanding that Veno had the good idea to clarify was fostered by formulations like:

"[w]e understand distinctive features here as a particular kind of substance-free units of mental representation, neither articulatory nor acoustic in themselves, but rather having articulatory and acoustic correlates." Reiss & Volenec (2018: 253, emphasis in original)

Calling features substance-free when they are actually substance-laden is probably not a good idea. What is meant is that phonological computation is substance-free. But the quote talks about phonological units, not computation.

2. Incompatible with modularity

The ground rule of (Fodorian) modularity is domain specificity: computational systems can only parse and compute units that belong to a proprietary vocabulary that is specific to the system at hand. In Hale-Reiss-Volenec's view, though, phonological units are defined by extra-phonological (phonetic) properties. Hence, given domain specificity, phonology is unable to parse phonetically defined units such as [labial], [coronal] etc. Or else, if "labial", "coronal" etc. are vocabulary items of the proprietary vocabulary used in phonological computation, this computation comprises both phonology and phonetics. Aside from the fact that there has been enough blurring of these boundaries in the past two decades or so, and that Hale-Reiss-Volenec have expressed themselves repeatedly in favour of a clear modular cut between phonetics and phonology, the architecture of their system defines phonology and phonetics as two separate systems, since it has a translational device (transduction, PIS) between them.

One concludes that phonological primes that are computed by phonological computation, but which bear phonetic labels (and in fact are not defined or differentiated by any other property), are a (modular) contradiction in terms.

To illustrate, consider what the equivalent would be in another linguistic module, (morpho‑)syntax: what would you say about syntactic primes such as number, animacy and person that come along as "coronal", "labial" etc., without making any reference to number, animacy or person? That is, syntactic primes that are defined not by syntactic but by extra-syntactic (phonological) vocabulary? In this approach it would then be said that even though the primes are defined by non-syntactic properties, they are syntactic in kind and undergo syntactic computation, which however ignores their non-syntactic properties.

This is but another way to state the common-sense question prompted by a system where the only properties that phonological primes have are phonetic, but which are then ignored by phonological computation: what are the phonetic labels good for? They do not do any labour in the phonology, and they need to be actively ignored. Hale-Reiss-Volenec's answer was mentioned above: they exist because of learnability. This is what we address in the following point.

3. Learnability

Learnability concerns of substance-free melodic primes are addressed by Samuels (2012), Dresher (2018) and a number of contributions in Clements & Ridouane (2011). They are the focus of a recent ms by Odden (2019).

At a more general cognitive level, we know positively that the human brain/mind is perfectly able to make sense of sensory input that was never encountered and for sure is not innate. Making sense here means "transform a sensory input into cognitive categories". There are multiple examples of people learning to interpret electric impulses as either auditory or visual perception: cochlear implants on the one hand, so-called artificial vision, or the bionic eye, on the other. The same goes for production: mind-controlled prostheses are real. Hence Hale & Reiss' statement that nothing can be parsed by the cognitive system that wasn't present at birth (or that the cognitive system does not already know) appears to be just incorrect. Saying that unknown stimuli can lead to cognitive categories everywhere except in phonology seems a position that is hard to defend.

References

Clements, George N. & Rachid Ridouane (eds.) 2011. Where do Phonological Features come from? Cognitive, physical and developmental bases of distinctive speech categories. Amsterdam: Benjamins.

Dresher, Elan 2018. Contrastive Hierarchy Theory and the Nature of Features. Proceedings of the 35th West Coast Conference on Formal Linguistics 35: 18-29.

Hale, Mark & Charles Reiss 2000. Substance Abuse and Dysfunctionalism: Current Trends in Phonology. Linguistic Inquiry 31: 157-169.

Hale, Mark & Charles Reiss 2003. The Subset Principle in Phonology: Why the tabula can't be rasa. Journal of Linguistics 39: 219-244.

Hale, Mark & Charles Reiss 2008. The Phonological Enterprise. Oxford: OUP.

Odden, David 2019. Radical Substance Free Phonology and Feature Learning. Ms.

Reiss, Charles & Veno Volenec 2018. Cognitive Phonetics: The Transduction of Distinctive Features at the Phonology–Phonetics Interface. Biolinguistics 11: 251-294.

Samuels, Bridget 2012. The emergence of phonological forms. In Anna Maria Di Sciullo (ed.), Towards a biolinguistic understanding of grammar: Essays on interfaces, 193-213. Amsterdam: Benjamins.

Wednesday, March 13, 2019

Alec Marantz on the goals and methods of Generative Grammar

I always like reading papers aimed at non-specialists by leading lights of a specialty. This includes areas that I have some competence in. I find that I learn a tremendous amount from such non-technical papers for they self-consciously aim to identify the big ideas that make an inquiry worth pursuing in the first place and the general methods that allow it to advance. This is why I always counsel students to not skip Chomsky's "popular" books (e.g. Language and Mind, Reflections on Language, Knowledge of Language, etc.).

Another nice (short) addition to this very useful literature is a paper by Alec Marantz (here): What do linguists do? Aside from giving a nice overview of how linguists work, it also includes a quick and memorable comment on Everett's (mis)understanding of his critique of GG. What Alec observes is that even if one takes Everett's claims entirely at face value empirically (which one really shouldn't), his conclusion that Pirahã is different in kind, wrt the generative procedures it deploys, from a language like English does not follow. Here is Alec:
His [Everett's, NH] analysis of Pirahã actually involves claiming Pirahã is just like every other language, except that it has a version of a mechanism that other languages use that, in Pirahã, limits the level of embedding of words within phrases.
I will let Alec explain the details, but what is important is that Everett confuses two very different issues that it is important to keep apart: what are the generative procedures that a given G deploys, and what are the products of those procedures. Generative grammarians of the Chomsky stripe care a lot about the first question (what are the rule types that Gs can have). What Alec observes (and what Everett actually concedes in his specific proposal) is that languages that use the very same generative mechanisms can end up with very different products. Who would have thunk it!

At any rate, take a look at Alec's excellent short piece. And while you are at it, you might want to read a short paper by another Syntax Master, Richie Kayne (here). He addresses a terrific question beloved by both neophytes and professionals: how many languages are there? I am pretty sure that his reply will both delight and provoke you. Enjoy.

Tuesday, March 12, 2019

Dan Milway discusses Katz's semantic theory

Dan Milway has an interesting project: reading Jerrold Katz's semantic investigations and discussing them for/with/near us. Here are two URLs that discuss the preface and chapter 1 of Katz's 1972 Semantic Theory. Other posts are promised. I like these archeological digs into earlier thoughts on still murky matters. I suspect you will too.

Omer on the autonomy of syntax; though you will be surprised what the autonomy is from!

Here is a post from Omer that bears on the autonomy issue. There are various conceptions of autonomy. The weakest is simply the claim that syntactic relations cannot be reduced to any others. The standard conception denies reduction to semantic generalizations or to probabilistic generalizations over strings (hence the utility of 'Colorless green ideas sleep furiously'). There are, however, stronger versions that relate to how different kinds of information intersect in derivations. And this is what Omer discusses: do the facts dictate that we allow phonological/semantic information to intersperse with syntactic information to get the empirical trains to run on time? Omer takes on a recent suggestion that this is required and, imo, shreds the conclusion. At any rate, enjoy!

Wednesday, March 6, 2019

More on non-academic jobs

Last week Norbert linked to a Nature article on non-academic careers. This week, Nature has another piece which offers very simple advice: talk to the people at the career center at your university. I did exactly this when I was finishing my PhD at MIT, and ended up interviewing for several non-academic research and development positions in industry.

I should also say that my advisor, Morris Halle, told me that I should try being a professor first because in his opinion it was easier to go from an academic career to a non-academic one. I'm not sure that's really true, but I took his advice, and I'm still working as a professor so far.

Saturday, March 2, 2019

Two articles in Inference this week

Juan reviews Language in Our Brain: The Origins of a Uniquely Human Capacity by Angela Friederici.

Bob and Noam respond to critics.