Sunday, March 31, 2019

Seeing with your tongue

EM: You refuse to look through my telescope.
TK (gravely): It's not a telescope, Errol. It's a kaleidoscope.
Exchange recounted in The Ashtray by Errol Morris (p12f)

Alex and Tobias in their post:

"At a more general cognitive level, we know positively that the human brain/mind is perfectly able to make sense of sensory input that was never encountered and for sure is not innate. Making sense here means "transform a sensory input into cognitive categories". There are multiple examples of how electric impulses have been learned to be interpreted as either auditive or visual perception: cochlear implants on the one hand, so-called artificial vision, or bionic eye on the other hand. The same goes for production: mind-controlled prostheses are real."

The nervous system can certainly "transform a sensory input into cognitive categories"; the question is how wild these transformations (transductions, interfaces) can be. No surprise, I'm going to say that they are highly constrained and therefore not fully arbitrary, basically limited to quasimorphisms. In the case of (visual) geometry, I think that we can go further and say that they are constrained to affine transformations and radial basis functions.
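To make the affine constraint concrete, here is a minimal sketch (my own illustration, not from the post): an affine map y = Ax + b on 2D retinal coordinates. Affine maps preserve exactly the structure a learner could exploit without a lookup table: midpoints map to midpoints, parallel lines stay parallel, and neighboring pixels stay neighbors (up to a uniform stretch).

```python
import numpy as np

def affine(A, b, pts):
    """Apply y = A x + b to an (N, 2) array of 2D points."""
    return pts @ A.T + b

A = np.array([[0.8, -0.3],
              [0.2,  1.1]])   # arbitrary invertible linear part
b = np.array([5.0, -2.0])     # translation

p, q = np.array([0.0, 0.0]), np.array([4.0, 2.0])
mid = (p + q) / 2

# Affine invariant: the image of the midpoint is the midpoint of the images.
lhs = affine(A, b, mid[None])[0]
rhs = (affine(A, b, p[None])[0] + affine(A, b, q[None])[0]) / 2
assert np.allclose(lhs, rhs)
```

The matrix and offset here are arbitrary; any invertible choice would show the same invariance.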

One of the better-known examples of a neuromorphic sensory prosthetic device is the Brainport, an electrode array that sits on the surface of the tongue (Liu & Delbruck 2010). The Brainport is a 2D sensor array, so there is a highly constrained geometric relationship between the tongue-otopic coordinate system of the Brainport and the retinotopic one. As noted in the Wikipedia article, "This and all types of sensory substitution are only possible due to neuroplasticity." But neuroplasticity is not total, as shown by the homotopically limited range of language reorganization (Tivarus et al. 2012).


So the thought experiment here is to imagine stranger tongue-otopic arrangements and ask whether they would work in a future Brainport V200 device.

1. Make a "funhouse" version of the Brainport. Flip the vertical and/or horizontal dimensions. This would be like wearing prismatic glasses. Reflections are affine transformations. This will work.
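A quick sketch (mine, not from the post) of why the funhouse flip stays inside the affine family: a horizontal flip of an n-pixel-wide grid is just x' = -x + (n - 1), i.e. A = [[-1, 0], [0, 1]] with offset b = [n - 1, 0] in pixel coordinates.

```python
import numpy as np

n = 4
img = np.arange(n * n).reshape(n, n)   # toy "camera" image

ys, xs = np.indices((n, n))
x_flipped = -xs + (n - 1)          # the affine map applied to x-coordinates
remapped = img[ys, x_flipped]      # resample the image through the map

# The affine remapping reproduces a horizontal flip exactly.
assert np.array_equal(remapped, np.fliplr(img))
```

The vertical flip is the same construction with A = [[1, 0], [0, -1]], and composing the two gives the 180° "funhouse" rotation.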

2. Make a color version of the Brainport. Provide three separate sensor arrays, one for each of the red, green, and blue wavelengths. In the retina the different cone types for each "pixel" are intermixed (spatially proximate); in the color Brainport they wouldn't be. We would effectively be trying to use an analog of stereo vision computation (but with three "eyes") to do color registration and combination. It's not clear whether this would work.

3. Make a "kaleidoscope" version of the Brainport. Randomly connect the camera pixel array with the sensor array, such that adjacent pixels are no longer guaranteed to be adjacent on the sensor array. The only way to recover the adjacency information is via a (learned) lookup table. This is beyond the scope of affine transformations and radial basis functions. This will not work.
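Here is a small demonstration (my own, under the post's framing) of why the kaleidoscope wiring escapes the affine family: fit the best least-squares affine map to the pixel wiring. A flip is recovered exactly; a random permutation of pixel positions leaves large residuals, so the only remaining description of the wiring is a learned lookup table.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
ys, xs = np.indices((n, n))
src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

def best_affine_residual(src, dst):
    """RMS residual of the least-squares affine fit dst ~ A src + b."""
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return np.sqrt(np.mean((X @ coef - dst) ** 2))

flip = src * np.array([-1.0, 1.0]) + np.array([n - 1.0, 0.0])
shuffled = src[rng.permutation(len(src))]

assert best_affine_residual(src, flip) < 1e-9      # flip: exactly affine
assert best_affine_residual(src, shuffled) > 1.0   # random wiring: not affine
```

The thresholds are illustrative: any affine wiring fits to numerical precision, while a random permutation of a grid leaves residuals on the order of the grid's own spread.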


Liu S-C, Delbruck T 2010. Neuromorphic sensory systems. Current Opinion in Neurobiology, 20(3), 288–295.

Tivarus ME, Starling SJ, Newport EL, Langfitt JT 2012. Homotopic language reorganization in the right hemisphere after early left hemisphere injury. Brain and Language, 123(1), 1–10.

