Faculty of Language<br />
Norbert<br />
<br />
<b>200k, 27M YBP?</b> (2019-12-12)<br />
<br />
In <i>The Atlantic</i> today, a summary of a new article in <i>Science Advances</i> this week about speech evolution:<br />
<br />
<a href="https://www.theatlantic.com/science/archive/2019/12/when-did-ancient-humans-start-speak/603484/?utm_source=feed">https://www.theatlantic.com/science/archive/2019/12/when-did-ancient-humans-start-speak/603484/?utm_source=feed</a><br />
<br />
<a href="https://advances.sciencemag.org/content/5/12/eaaw3916">https://advances.sciencemag.org/content/5/12/eaaw3916</a><br />
<br />
I think Greg Hickok had the most trenchant comment, that people are hoping “that there was one thing that had to happen and that released the linguistic abilities.” And John Locke had the best bumper sticker, “Motor control rots when you die.”<br />
<br />
As the authors say in the article, recent work has shown that primate vocal tracts are capable of producing some vowel sounds:<br />
<br />
<a href="https://advances.sciencemag.org/content/2/12/e1600723">https://advances.sciencemag.org/content/2/12/e1600723</a><br />
<br />
<a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0169321">https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0169321</a><br />
<br />
This is certainly interesting from a comparative physiological perspective, and the article has a great summary of tube models for vowels. But I don't think that "producing vowel sounds" should be equated with "having speech" in the sense of "having a phonological system". My own feeling is that we should be looking for a couple of things. First, the ability to pair non-trivial sound sequences (phonological representations) with meanings in long-term memory. Some nonhuman animals (including dogs) do have this ability, or something like it, so this isn't the linchpin.<br />
<br />
<a href="http://science.sciencemag.org/content/304/5677/1682.short">http://science.sciencemag.org/content/304/5677/1682.short</a><br />
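As a concrete aside on those tube models: the simplest version treats the vocal tract as a uniform tube, closed at the glottis and open at the lips, with resonances at odd multiples of c/4L. A minimal sketch (the 17.5 cm tract length and 350 m/s speed of sound are illustrative textbook values, not figures from the article):

```python
# Resonances of a uniform tube closed at one end (glottis) and open
# at the other (lips): F_n = (2n - 1) * c / (4 * L).
# This is the textbook first approximation to a schwa-like vocal tract;
# the tract length and speed of sound below are illustrative assumptions.

def tube_resonances(length_m, n_formants=3, c=350.0):
    """Return the first n quarter-wavelength resonances (Hz) of a
    uniform tube of the given length (meters)."""
    return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_formants + 1)]

# A 17.5 cm tract gives the familiar pattern of roughly 500/1500/2500 Hz.
print(tube_resonances(0.175))  # approx. [500, 1500, 2500] Hz
```

For a neutral, schwa-like tract this gives the familiar evenly spaced formant pattern; vowels like /a/, /i/ and /u/ then arise as perturbations of this uniform-tube baseline.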
<br />
Second, the emergence of speech sound sequencing abilities in both the motor and perceptual systems. That is, the ability to perform computations over sequences; to compose, decompose and manipulate sequences of speech sounds, which includes concatenation, reduplication, phonotactic patterning, phonological processes and so on. The closest findings I am aware of showing this in nonhuman animals (birds, in this case) are in:<br />
<br />
<a href="https://www.nature.com/articles/ncomms10986">https://www.nature.com/articles/ncomms10986</a><br />
<br />
<a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2006532">https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2006532</a><br />
<br />
In those papers the debate is framed in terms of syntax, which I think is misguided. But the experiments do show some sound sequencing abilities in the birds which might coincide with some aspects of human phonological abilities. But, of course, this would be an example of convergent evolution, so it tells us almost nothing about the evolutionary history in primates.<br />
<br />
Bill Idsardi<br />
<br />
<b>Herb Terrace on Nim and Noam</b> (2019-08-19)<br />
<br />
From my colleague Greg Ball, news about a new <a href="https://www.psychologytoday.com/us/blog/the-origin-words">blog</a> and <a href="https://cup.columbia.edu/book/why-chimpanzees-cant-learn-language-and-only-humans-can/9780231171106">book</a> by Herb Terrace, which will ask how words evolved (as opposed to grammar).<br />
<br />
Bill Idsardi<br />
<br />
<b>Interesting Post Doc possibility for linguists with cogneuro interests</b> (2019-07-08)<br />
<br />
William Matchin sent me this post doc opportunity for posting on FoL:<br />
<br />
A Postdoctoral Fellow position is available at the University of South Carolina, under the direction of Prof. William Matchin in the NeuroSyntax laboratory. The post-doc will help develop new projects and lead the acquisition, processing, and analysis of behavioral and neuroimaging data. They will also assist with the organization of the laboratory and coordination of laboratory members. We are particularly interested in candidates with a background in linguistics who are interested in projects at the intersection of linguistics and neuroscience. For more information about our research program, please visit <a href="http://www.williammatchin.com/">www.williammatchin.com</a>.<br />
<br />
Salary and benefits are commensurate with experience. The position is for one year, renewable for a second year, and potentially longer, pending the acquisition of grant funding.<br />
<br />
The postdoctoral associate will work in close association with the Aphasia Lab (headed by Dr. Julius Fridriksson) as part of the NIH-funded Center for the Study of Aphasia Recovery (C-STAR). The NeuroSyntax lab is also part of the Linguistics program, the Neuroscience Community, and the Center for Mind and Brain at UofSC.<br />
<br />
The University of South Carolina is in historic downtown Columbia, the capital of South Carolina. Columbia is centrally located within the state, a two-hour drive from both the beach (including historic Charleston, SC) and the mountains (including beautiful Asheville, NC).<br />
<br />
If you are interested in this position, please send an email to Prof. William Matchin at <a href="mailto:matchin@mailbox.sc.edu">matchin@mailbox.sc.edu</a> with your CV and a brief introduction to yourself, your academic background, and your research interests. You can find more details and apply online: <a href="https://uscjobs.sc.edu/postings/60022">https://uscjobs.sc.edu/postings/60022</a>.<br />
Norbert<br />
<br />
<b>Postdoc position at South Carolina</b> (2019-07-03)<br />
<br />
Bill Idsardi<br />
<br />
<b>The speed of evolution in domestication</b> (2019-06-17)<br />
<br />
In <a href="https://www.pnas.org/content/early/2019/06/11/1820653116">PNAS</a> today, an article on "Evolution of facial muscle anatomy in dogs" which argues for an adaptation of canine facial anatomy in the context of domestication.<br />
<br />
From the abstract: "Domestication shaped wolves into dogs and transformed both their behavior and their anatomy. Here we show that, in only 33,000 y, domestication transformed the facial muscle anatomy of dogs specifically for facial communication with humans."<br />
<br />
Bill Idsardi<br />
<br />
<b>Bees learn symbols for the numbers 2 and 3</b> (2019-06-04)<br />
<br />
In <a href="https://royalsocietypublishing.org/doi/full/10.1098/rspb.2019.0238">Proceedings of the Royal Society B</a> today, bees learned to associate "N" with groups of 2, and "⊥" with groups of 3.<br />
<br />
From the discussion: "Our findings show that independent groups of honeybees can learn and apply either a sign-to-numerosity matching task or a numerosity-to-sign matching task and subsequently apply acquired skills to novel stimuli. Interestingly, despite bees demonstrating a direct numerosity and sign association, they were unable to transfer the acquired skill to solve a reverse matching task."<br />
<br />
So, as someone who remembers watching <i>Romper Room</i> as a child, I'd say these bees are clearly Do Bees.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSw-PZ39hjIZR9IuXd4HXf1yV3zWJJYOT0DYakvCryO0pgnO4OFAqVRvcEW1B0-9FNcr9SSLVd7RgW6Ih6nevC1K7LYDuzl0bsfxi3nL_pEvgmm64XPRyrv9iMA135dEcdY8bGnBjaS8qu/s1600/dobee.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="818" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSw-PZ39hjIZR9IuXd4HXf1yV3zWJJYOT0DYakvCryO0pgnO4OFAqVRvcEW1B0-9FNcr9SSLVd7RgW6Ih6nevC1K7LYDuzl0bsfxi3nL_pEvgmm64XPRyrv9iMA135dEcdY8bGnBjaS8qu/s320/dobee.jpg" width="261" /></a></div>
<br />
Bill Idsardi<br />
<br />
<b>Why do geese honk?</b> (2019-05-17)<br />
<br />
A <a href="https://podcast-a.akamaihd.net/mp3/podcasts/quirksaio-cNxDucjc-20190517.mp3">question</a> to <a href="https://www.cbc.ca/radio">CBC Radio</a>'s <a href="https://www.cbc.ca/radio/podcasts/science-and-tech/quirks-quarks---segments/">Quirks and Quarks</a> program: "Why do Canada geese honk while migrating?" The answer the CBC gives is "They honk to communicate their position in the flock". But Elan Dresher gave a different <a href="http://homes.chass.utoronto.ca/~dresher/col9.html">answer</a> back in 1996.<br />
<br />
Bill Idsardi<br />
<br />
<b>New blog: Outdex</b> (2019-05-17)<br />
<br />
Here's a new blog, <a href="https://outde.xyz/">Outdex</a>, featuring Thomas Graf (and friends), that should be of interest to people who read FoL. This week, Thomas has a <a href="https://outde.xyz/2019-05-15/underappreciated-arguments-the-inverted-t-model.html#underappreciated-arguments-the-inverted-t-model">post</a> on the "inverted T" (or "inverted Y") model of Generative Grammar.<br />
<br />
Bill Idsardi<br />
<br />
<b>GG + NN = Thing 1 + Thing 2?</b> (2019-05-09)<br />
<br />
<a href="https://muse.jhu.edu/journal/112">Language</a>, <strike>stealing</strike> adopting the <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences">BBS</a> model, has a <a href="https://muse.jhu.edu/issue/40022">target article</a> by Joe Pater and several replies.<br />
<br />
Here's my attempt at bumper-sticker summaries for the articles. You can add your own in the comments. (No, there are no prizes for this.)<br />
<br />
<a href="https://muse.jhu.edu/article/719231">Pater</a>: GG + NN + ?? = Profit!<br />
<br />
<a href="https://muse.jhu.edu/article/719233">Berent & Marcus</a>: Structure + Composition = Algebra<br />
<br />
<a href="https://muse.jhu.edu/article/719235">Dunbar</a>: Marr + Marr = Marr<br />
<br />
<a href="https://muse.jhu.edu/article/719237">Linzen</a>: NNs learn GGs sometimes, sorta<br />
<br />
<a href="https://muse.jhu.edu/article/719239">Pearl</a>: ?? = interpretability<br />
<br />
<a href="https://muse.jhu.edu/article/719241">Potts</a>: Functions + Logic = Vectors + DL<br />
<br />
<a href="https://muse.jhu.edu/article/719243">Rawski & Heinz</a>: No Free Lunch, but there is a GI tract<br />
<br />
Pater starts out with the observation that <i>Syntactic Structures</i> and "The perceptron: A perceiving and recognizing automaton" were both published in 1957.<br />
<br />
<a href="https://www.goodreads.com/book/popular_by_date/1957">Here</a> is a list of other things that were published in 1957 (hint: 116). It may say too much about me, but some of my favorites over the years from this list have included: <i>The Cat in the Hat, From Russia with Love, The Way of Zen, Endgame </i>and <i>Parkinson's Law</i>. But I'm afraid I can't really synthesize all that into an enlightened spy cat whose work expands to fill the nothingness. You can add your own mash-ups in the comments. (No, there are no prizes for this either.)<br />
<br />
<br />
Bill Idsardi<br />
<br />
<b>Scheering forces</b> (2019-04-28)<br />
<br />
I'll respond to Tobias in a post rather than a blog reply because he raises several points, and I want to include a picture or two.<br />
<br />
1. TS: "When you cross the real-world boundary, i.e. when real-world items (such as wave lengths) are mapped onto cognitive categories (colors perceived), you are talking about something else since the real world is not a module."<br />
<br />
The arguments I was making hold equally well for representations <b>within </b>the central nervous system (CNS), for example between the retina, the lateral geniculate nucleus and V1. Real-world spatial relations are mapped partially veridically onto the retina (due to the laws of optics). The spatial organization of the retina is (partially) maintained in the mapping to cortex; that is, LGN and V1 are <b>retinotopic</b>. So the modules here are the retina, LGN and V1, which are certainly modules within the CNS.<br />
<br />
The same sort of relationship is true for acoustic frequency, the cochlea, the medial geniculate nucleus (MGN), and A1. Acoustic frequencies are mapped partially veridically onto the coiled line of hair cells in the cochlea (due to the laws of acoustics). That is, frequency is mapped into a spatial (place) code at the cochlea (though this is not the only mechanism for low frequencies). And the cochlear organization is partially preserved in the mappings to MGN and A1; they are cochleotopic (= tonotopic). There <b>is </b>an "arbitrary" aspect here: frequency is represented with a spatial code. But the spatial code is not completely arbitrary or random; it is organized and ordinal, such that frequency increases monotonically from the apex to the base of the cochlea, as shown in the diagram from Wikipedia, and is preserved in tonotopic gradients in A1. That is, the mappings between the modules are quasimorphisms.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5IhmaggsntRAdEQW1dbAqA2nI-m99oXq3Dhyphenhyphen6LevSQt_yZYDZ5KpYslXzzAB7zQA_YfyLzYhh-HdL6LLRH_6EkSvjNhQYbqimAPiNwLyId2mP5fOeDsSu78QrqRg7AWPnpALtiiynTSjR/s1600/512px-Uncoiled_cochlea_with_basilar_membrane.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="306" data-original-width="512" height="191" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5IhmaggsntRAdEQW1dbAqA2nI-m99oXq3Dhyphenhyphen6LevSQt_yZYDZ5KpYslXzzAB7zQA_YfyLzYhh-HdL6LLRH_6EkSvjNhQYbqimAPiNwLyId2mP5fOeDsSu78QrqRg7AWPnpALtiiynTSjR/s320/512px-Uncoiled_cochlea_with_basilar_membrane.png" width="320" /></a></div>
<br />
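The monotone place code described above can be made concrete with the Greenwood function, which relates position along the basilar membrane to characteristic frequency. A small sketch (the constants are the standard human values from Greenwood 1990; the code is illustrative, not part of the argument above):

```python
def greenwood_freq(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at position x along the human
    basilar membrane, with x in [0, 1] measured from apex to base
    (Greenwood 1990 constants for humans)."""
    return A * (10.0 ** (a * x) - k)

positions = [i / 10.0 for i in range(11)]
freqs = [greenwood_freq(x) for x in positions]

# The place code is monotone: frequency increases from apex to base,
# so the spatial ordering of hair cells preserves the ordering of
# frequencies. The mapping is order-preserving, not arbitrary.
assert all(f1 < f2 for f1, f2 in zip(freqs, freqs[1:]))
print(freqs[0], freqs[-1])  # roughly 20 Hz at the apex, roughly 20 kHz at the base
```

This is exactly the sense in which the frequency-to-place code is a quasimorphism: the look-up is not item-by-item arbitrary, because the ordering of the inputs fixes the ordering of the outputs.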
<br />
2. TS: "when I use the word "arbitrary" I only mean the above: the fact that any item of list A may be associated with any item of list B."<br />
<br />
Then I think you should find a different term. I also think there has been far too much focus on the items. As I have tried to explain, items enter into relationships with other items, and we need to consider the preservation (or lack thereof) of these relationships across the interface; we need to keep track of the quasimorphisms. So for many of the intermodular interfaces in sensation and perception, it is not the case that any item on one side of the interface can be mapped to any item on the other side. Spatial, temporal and other ordering relationships tend to be preserved across the interfaces, and this strongly constrains the mapping of individual items. Remarkably, this is true even in synesthesia; see Plate 9 from Cytowic 2018.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD1hw0-YzFWT9g28-8JsKqhIo8YkJ9T2Lmd7O6RtR9hk1m2XgYCARZXdBNPzdfbcVt2RrdCxx89cLnv1h-3KkGHtIXVRTQaLRxmAq-pDUoW77txsFp6TGWVBTBL_FkCMGcaj3_NXChmVGF/s1600/syn.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="747" data-original-width="715" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD1hw0-YzFWT9g28-8JsKqhIo8YkJ9T2Lmd7O6RtR9hk1m2XgYCARZXdBNPzdfbcVt2RrdCxx89cLnv1h-3KkGHtIXVRTQaLRxmAq-pDUoW77txsFp6TGWVBTBL_FkCMGcaj3_NXChmVGF/s320/syn.png" width="306" /></a></div>
<br />
3. TS: "That's all fine, but I am talking about the mind, not about the brain. Whatever the wirings in the brain, they won't tell us anything about how cognitive items of two distinct vocabularies are related (Vocabulary Insertion), or how a real-world item is associated to a cognitive category (wave length - color)."<br />
<br />
I am not a dualist, and I doubt that this blog is a good forum for a discussion of the merits of mind/body dualism. Here is a quote from Chomsky 1983 on the mind/brain; he reiterates this in Chomsky 2005:257 and in many other places.<br />
<br />
"Now, I think that there is every reason to suppose that the same kind of “modular” approach is appropriate for the study of the mind — which I understand to be the study, at an appropriate level of abstraction, of properties of the brain ..."<br />
<br />
Just to be clear, I am not saying that cognitive scientists should defer to neuroscientists, but they should talk to them. The idea that we have learned nothing about color perception and cognition from the study of the human visual pathway is simply false.<br />
<br />
4. TS: "is there evidence for interfaces that are not list-based?"<br />
<br />
Yes, almost any (non-linguistic) set of items with an ordering relation. When aspects of the ordering relation are preserved across the interface, the mapping will be a quasimorphism, and the item-to-item mappings will be strongly constrained by this: if a &lt; b then f(a) &lt;<sub>f</sub> f(b). What's unusual about the lexicon is that small changes in pronunciation can lead to enormous changes in meaning. In many of the other cases we instead end up with a very small, almost trivial look-up table, something like the sets of basis vectors for the two spaces, as with homomorphisms between groups in algebra.<br />
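The constraint can be phrased as a one-line check: a mapping between two ordered sets is order-preserving (monotone) just in case a &lt; b implies f(a) &lt; f(b). A small sketch, with made-up frequency-to-place numbers purely for illustration:

```python
def is_order_preserving(items, f):
    """True iff a < b implies f(a) < f(b) for all pairs of items,
    i.e. the mapping is monotone (order-preserving) with respect to
    the natural ordering of the items."""
    s = sorted(items)
    return all(f(a) < f(b) for a, b in zip(s, s[1:]))

# Illustrative frequency (Hz) -> cochlear place (mm from apex) table;
# the numbers are invented for the example, not measured values.
place = {100: 2.0, 400: 10.0, 1600: 18.0, 6400: 26.0}
assert is_order_preserving(place, place.__getitem__)  # tonotopic: order preserved

# A lexicon-like mapping need not be order-preserving: small changes
# in the "sound" can map to wildly different "meanings".
lexicon = {1: 50, 2: 3, 3: 99}
assert not is_order_preserving(lexicon, lexicon.__getitem__)
```

Checking adjacent pairs of the sorted items suffices, since strict order is transitive; the contrast between the two tables is the point of the example.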
<br />
5. TS: "is there evidence for associations that correspond to partial veridicality, i.e. where the to-be-related items are commensurable, i.e. allow for the assessment of similarity?" ...<br />
"The same goes for the association of real-world items with cognitive categories: trying to assess the (dis)similarity of "450-485 nm" and "blue", as opposed to, say, "450-485 nm" and "red" (or any other perceived color for that matter) is pointless. Wave lengths and perceived colors are incommensurable and you won't be able to tell whether the match is veridical, non-veridical or partially veridical."<br />
<br />
This isn't pointless at all. In fact, remarkable progress has been made in this area. See, for example, Hardin 1988, Hardin & Maffi 1997, Palmer 1999 and Bird et al 2014. The match is partially veridical in a variety of ways. Small changes in spectral composition generally lead to small changes in perceived hue; the mapping is a quasimorphism. Importantly, though, the topology of the representation changes -- a non-veridical aspect of the mapping -- from a linear relation at the cone cells of the retina to a circular one in the opponent-process representation in LGN.<br />
<br />
6. TS: "The secret key is the look-up table that matches items of the two modules."<br />
<br />
I agree with this, except that I want the look-up table to be <b>as small as possible, the "basis vectors" for the spaces</b>. In my opinion, the best way to accomplish this is with innate initial look-up tables for the <b>features</b>, giving the learner the initial conditions for the Memory-Action and Perception-Memory mappings. The feature-learning approaches, including Mielke 2008, Dresher 2014 and Odden 2019, start with an ability to perceive IPA-like phonetic representations. I simply don't believe that this is a plausible idea, given how difficult even simple cases are for such an approach, as explained in Dillon, Dunbar & Idsardi 2013.<br />
<br />
References:<br />
<br />
Bird CM, Berens SC, Horner AJ & Franklin A. 2014. Categorical encoding of color in the brain. <i>Proceedings of the National Academy of Sciences</i>, 111(12), 4590–4595.<br />
<br />
Chomsky N. 1983. The Psychology of Language and Thought: Noam Chomsky interviewed by Robert W. Rieber. In RW Rieber (ed) <i>Dialogues on the Psychology of Language and Thought. </i>Plenum.<br />
<br />
Chomsky N. 2005. Reply to Lycan. In LM Antony & N Hornstein (eds) <i>Chomsky and his Critics.</i> Blackwell.<br />
<br />
Cytowic RE. 2018. <i>Synesthesia</i>. MIT Press.<br />
<br />
Dillon B, Dunbar E & Idsardi WJ. 2013. A single-stage approach to learning phonological categories: insights from Inuktitut. <i>Cognitive Science</i>, 37(2), 344–377.<br />
<br />
Hardin CL. 1988. <i>Color for Philosophers: Unweaving the Rainbow.</i> Hackett.<br />
<br />
Hardin CL & Maffi L. 1997. <i>Color Categories in Thought and Language.</i> Cambridge University Press.<br />
<br />
Palmer SE. 1999. <i>Vision Science: Photons to Phenomenology.</i> MIT Press.<br />
<br />
Bill Idsardi<br />
<br />
<b>ECoG to speech synthesis</b> (2019-04-24)<br />
<br />
In <i>Nature</i> today, another fascinating <a href="https://www.nature.com/articles/s41586-019-1119-1">article</a> from Eddie Chang's lab at UCSF. They were able to synthesize intelligible speech from ECoG recordings of cortical activity in sensory-motor and auditory areas. The system was even able to decode and synthesize speech successfully from silently mimed speech. The picture (Figure 1 in the article) shows a block diagram of the system.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgB3WK3TsAgMbIv6bzk0HE55W9WwllfpulPn2O-lnBiQJjFZA2-Lyu1oPvktIyMicA5NxVLS5MP0z4Bot2whnaEwiI6OS6x7kMRhsydoY9WIo2L1IIn-ooUSAT0a2Kz-2isVPgQ1YqWg7li/s1600/41586_2019_1119_Fig1_HTML.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="595" data-original-width="900" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgB3WK3TsAgMbIv6bzk0HE55W9WwllfpulPn2O-lnBiQJjFZA2-Lyu1oPvktIyMicA5NxVLS5MP0z4Bot2whnaEwiI6OS6x7kMRhsydoY9WIo2L1IIn-ooUSAT0a2Kz-2isVPgQ1YqWg7li/s400/41586_2019_1119_Fig1_HTML.png" width="400" /></a></div>
<br />
There are also <a href="https://www.nature.com/articles/d41586-019-01181-y">two</a> <a href="https://www.nature.com/articles/d41586-019-01328-x">commentaries</a> on the work, along with some speech samples from the system.<br />
<br />
Bill Idsardi<br />
<br />
<b>A possible EFP developmental trajectory from syllables to segments</b> (2019-04-19)<br />
<br />
Infants can show a puzzling range of abilities and deficits in comparison with adults, out-performing adults on many phonetic perception tasks while lagging behind in other ways. Some null results using one procedure can be overturned with more sensitive procedures and some contrasts are "better" than others in terms of effect size and various acoustic or auditory measures of similarity (Sundara et al 2018). And there are other oddities about the infant speech perception literature, including the fact that the syllabic stimuli generally need to be much longer than the average syllable durations in adult speech (often twice as long). One persistent idea is that infants start with a syllable-oriented perspective and later move to a more segment-oriented one (Bertoncini & Mehler 1981), and that in some languages adults still have a primary orientation for syllables, at least for some speech production tasks (O'Seaghdha et al 2010; but see various replies, e.g. Qu et al 2012).<br />
<br />
More than a decade ago, I worked with Rebecca Baier and Jeff Lidz to try to investigate audio-visual (AV) integration in 2-month-old infants (Baier et al 2007). Infants were presented with one audio track along with two synchronized silent movies of the same person (namely Rebecca) presented on a large TV screen. The movies were of different syllables being produced; the audio track generally matched one of the movies. Using this method we were able to replicate the results of Kuhl & Meltzoff 1982 that two-month-old infants are able to match faces and voices among /a/, /i/, and /u/. Taking this one step further, we were also able to show that infants could detect dynamic syllables, matching faces with, for example, /wi/ vs. /i/. We did some more poking around with this method, but got several results that were difficult to understand. One of them was a failure of the infants to match on /wi/ vs /ju/. (And we are pretty sure that "we" and "you" are fairly frequently heard words for English-learning infants.) Furthermore, when they were presented with /ju/ audio alongside /wi/ and /i/ faces, they matched the /ju/ audio with the /wi/ video. This behavior is at least consistent with a syllable-oriented point of view: they hear a dynamic syllable with something [round] and something [front] in it, but they cannot tell the relative order of [front] and [round]. This also seems consistent with the relatively poor abilities of infants to detect differences in serial order (Lewkowicz 2004). Rebecca left to pursue electrical engineering and this project fell by the wayside.<br />
<br />
This is not to say that infants cannot hear a difference between /wi/ and /ju/, though. I expect that dishabituation experiments would succeed on this contrast. The infants would also not match faces for /si/ vs /ʃi/ but the dishabituation experiment worked fine on that contrast (as expected). So, certainly there are also task differences between the two experimental paradigms.<br />
<br />
But I think that now we may have a way to understand these results more formally, using the Events, Features and Precedence model discussed on the blog a year ago, and developed more extensively in Papillon 2018. In that framework, we can capture a /wi~ju/ syllable schematically as (other details omitted):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5nMNmQdJNZTPsNNPRbXxdcTLpKfMdWVXc1dxeq9W1m8k7zxtK46pE0oo-Ok94iHUCWsEyJrDLox57wJUHs8x8uGGLdlptNbUhAWRoeI4IFaecRGnNtuRIBShu2mU9oh_BmsoQFPBlCSEZ/s1600/graphviz.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="487" data-original-width="702" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5nMNmQdJNZTPsNNPRbXxdcTLpKfMdWVXc1dxeq9W1m8k7zxtK46pE0oo-Ok94iHUCWsEyJrDLox57wJUHs8x8uGGLdlptNbUhAWRoeI4IFaecRGnNtuRIBShu2mU9oh_BmsoQFPBlCSEZ/s320/graphviz.png" width="320" /></a></div>
<br />
The relative ordering of [front] and [round] is underspecified here, as is the temporal extent of the events. The discrimination between /wi/ and /ju/ amounts to incorporating the relative ordering of [front] and [round], that is, which of the dashed lines is needed in:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQpc0-KceuIzU6ynUbUo6A1phZ2smXiQeExPj1vXH9VLm8p9_lA9wAjymNpjVTDs07QmHuvqA5wFokvkZJ5kuOuRUtLWYBMdM0ro0Pat9AKbgR3dJm-MAN44xX65kMI4cCj_DF17yWXefV/s1600/graphviz+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="472" data-original-width="702" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQpc0-KceuIzU6ynUbUo6A1phZ2smXiQeExPj1vXH9VLm8p9_lA9wAjymNpjVTDs07QmHuvqA5wFokvkZJ5kuOuRUtLWYBMdM0ro0Pat9AKbgR3dJm-MAN44xX65kMI4cCj_DF17yWXefV/s320/graphviz+%25281%2529.png" width="320" /></a></div>
<br />
<br />
When [round] precedes [front], that is the developing representation for /wi/; when [front] precedes [round], that is the developing representation for /ju/. Acquiring this kind of serial order knowledge between different features might not be that easy, as it is possible that [front] and [round] are initially segregated into different streams (Bregman 1990), and order perception across streams is worse than order perception within streams. It's conceivable that the learner would be driven to look for additional temporal relations when the temporally underspecified representations incorrectly predict such "homophony", akin to a hashing collision.<br />
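To make the "collision" idea concrete, here is a schematic sketch (my own rendering, not code from Papillon 2018) in which an EFP representation is just a set of feature events plus a set of precedence pairs. Without an ordering between [front] and [round], the /wi/ and /ju/ representations are literally identical; one added precedence edge disambiguates them:

```python
# An EFP-style representation: events carry features, and precedence
# is a set of (earlier, later) pairs over those events.
def efp(events, precedence=()):
    return (frozenset(events), frozenset(precedence))

# Underspecified syllable: a [front] event and a [round] event with
# no ordering between them. This single object covers both /wi/ and /ju/.
underspecified = efp({"front", "round"})

# Adding the relative order of the two events splits the representation:
wi = efp({"front", "round"}, {("round", "front")})  # /wi/: [round] before [front]
ju = efp({"front", "round"}, {("front", "round")})  # /ju/: [front] before [round]

# With order underspecified, /wi/ and /ju/ "collide" (one representation);
# with a precedence edge added, they are distinct.
assert underspecified == efp({"round", "front"})
assert wi != ju
```

The collision is exactly the predicted "homophony": until the learner commits to an ordering edge, the two words share a single stored form.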
<br />
If we pursue this idea more generally, the EFP graphs will gradually become more segment oriented as additional precedence relations are added, as in:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFYIv1QpuBQbg_A_fwlcBcXHNNsiVBOq-gkGCzUEd-v57QcLd3qVtOAkJUIS4VazitVqqTTw6V2LhbzPYfyoIbw08lLBh8wS7wq_ellriF7xl1s87yJ4FZLB9cflY_RFlzuYVa3dTjLw22/s1600/graphviz+%25282%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="408" data-original-width="702" height="185" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFYIv1QpuBQbg_A_fwlcBcXHNNsiVBOq-gkGCzUEd-v57QcLd3qVtOAkJUIS4VazitVqqTTw6V2LhbzPYfyoIbw08lLBh8wS7wq_ellriF7xl1s87yJ4FZLB9cflY_RFlzuYVa3dTjLw22/s320/graphviz+%25282%2529.png" width="320" /></a></div>
<br />
<br />
And if we then allow parallel, densely-connected events to fuse into single composite events, we can get even closer to a segment-oriented representation:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLKjmUHZnodmPOQgANt0hmV7YkmL3_FCEsWsqNXOUJnFaMQQI61LoZkoXp69sgZyNWX1Zk3NlGiBJ2e2cCfa0Yl2MMX-efc2lY3G1IN8feHnspt1CfLK8IARw1Cvw0xgpeyR1D5-NO9cE6/s1600/graphviz+%25283%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="288" data-original-width="793" height="116" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLKjmUHZnodmPOQgANt0hmV7YkmL3_FCEsWsqNXOUJnFaMQQI61LoZkoXp69sgZyNWX1Zk3NlGiBJ2e2cCfa0Yl2MMX-efc2lY3G1IN8feHnspt1CfLK8IARw1Cvw0xgpeyR1D5-NO9cE6/s320/graphviz+%25283%2529.png" width="320" /></a></div>
<br />
<br />
So the general proposal would be that the developing representation of the relative order of features is initially rather poor and is underspecified for order between features in different "streams". Testing this is going to be a bit tricky, though. An even more general conclusion would be that features are not learned from phonetic segments (Mielke 2008) but are gradually combined during development to form segment-sized units. We could also add other features encoding extra phonetic detail to these developing representations; it could then be the case that different phonetic features have different temporal acuity for learners, and so cohere with other features to different extents.<br />
<br />
<b>References</b><br />
<br />
Baier, R., Idsardi, W. J., & Lidz, J. (2007). Two-month-olds are sensitive to lip rounding in dynamic and static speech events. In <i>Proceedings of the International Conference on Auditory-Visual Speech Processing.</i><br />
<br />
Bertoncini, J., & Mehler, J. (1981). Syllables as units in infant speech perception. <i>Infant Behavior and Development</i>, 4, 247-260.<br />
<br />
Bregman, A. (1990). <i>Auditory Scene Analysis. </i>MIT Press.<br />
<br />
Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. <i>Science</i>, 218, 1138-1141.<br />
<br />
Lewkowicz, D. J. (2004). Perception of serial order in infants. <i>Developmental Science, </i>7(2), 175–184.<br />
<br />
Mielke, J. (2008). <i>The Emergence of Distinctive Features. </i>Oxford University Press.<br />
<br />
O’Seaghdha, P. G., Chen, J. Y., & Chen, T. M. (2010). Proximate units in word production: Phonological encoding begins with syllables in Mandarin Chinese but with segments in English. <i>Cognition</i>, 115(2), 282-302.<br />
<br />
Papillon, M. (2018). Precedence Graphs for Phonology: Analysis of Vowel Harmony and Word Tones. ms.<br />
<br />
Qu, Q., Damian, M. F., & Kazanina, N. (2012). Sound-sized segments are significant for Mandarin speakers. <i>Proceedings of the National Academy of Sciences, </i>109(35), 14265-14270.<br />
<br />
Sundara, M., Ngon, C., Skoruppa, K., Feldman, N. H., Onario, G. M., Morgan, J. L., & Peperkamp, S. (2018). Young infants’ discrimination of subtle phonetic contrasts. <i>Cognition</i>, 178, 57–66.Bill Idsardihttp://www.blogger.com/profile/10570926308058368183noreply@blogger.com1tag:blogger.com,1999:blog-5275657281509261156.post-33494459543103111382019-04-09T20:23:00.001-07:002019-04-09T20:24:16.833-07:00Two cipher examplesI'm not convinced that I'm getting my point about "arbitrary" across, so maybe we should try some toy examples from a couple of ciphers. Let's encipher "Colorless green ideas" in a couple of ways.<br />
<br />
1. <a href="https://www.rot13.com/">Rot13</a>: "Colorless green ideas" ⇒ "Pbybeyrff terra vqrnf". This method is familiar to old Usenet denizens. It makes use of the fact that the Latin alphabet has 26 letters by rotating them 13 places (a⇒n, b⇒o, ... m⇒z, n⇒a, o⇒b, ...) and so this method is its own inverse. That is, you decode a rot13 message by doing rot13 on it a second time. This is a special case of a Caesar cipher. Such ciphers are not very arbitrary as they mostly preserve alphabetic letter order, but they "wrap" the alphabet around into a circle (like color in the visual system) with "z" being followed by "a". In a rotation cipher, once you figure out one of the letter codes, you've got them all. So if "s" maps to "e" then "t" maps to "f" and so on.<br />
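For concreteness, here is a sketch of rot13 (Python's standard library happens to ship a rot13 codec, but writing it out makes the self-inverse and "one letter gives you all" properties explicit):<br />

```python
import string

def rot13(text):
    """Rotate each Latin letter 13 places, preserving case; leave the rest alone."""
    lower = string.ascii_lowercase
    upper = string.ascii_uppercase
    table = str.maketrans(lower + upper,
                          lower[13:] + lower[:13] + upper[13:] + upper[:13])
    return text.translate(table)

enc = rot13("Colorless green ideas")
print(enc)                                   # Pbybeyrff terra vqrnf
assert rot13(enc) == "Colorless green ideas" # rot13 is its own inverse

# Cracking one letter cracks them all: in rot13, s => f forces t => g.
assert rot13("s") == "f" and rot13("t") == "g"
```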
<br />
2. Scrambled alphabet cipher: Randomly permute the 26 letters to other letters, for example A..Z ⇒ PAYRQUVKMZBCLOFSITXJNEHDGW. This is a letter-based codebook. This is arbitrary, <i>at least from the individual letter perspective</i>, as it won't preserve alphabetic order, encoding "Colorless" as "Yfcftcqxx". So knowing one letter mapping (c⇒y) won't let you automatically determine the others.<br />
<br />
But this cipher <b>does preserve </b>various other properties, such as capitalization, number of distinct atomic symbols, spaces between words, message length, doubled letters, and sequential order in general.<br />
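A sketch of the scrambled-alphabet cipher with the key above (the helper name is mine; the point is to check which properties survive the arbitrary letter mapping):<br />

```python
import string

KEY = "payrquvkmzbclofsitxjnehdgw"  # a random image of a..z

def substitute(text, key=KEY):
    """Monoalphabetic substitution: arbitrary per letter, but it preserves
    case and passes non-letters (spaces, punctuation) through untouched."""
    table = str.maketrans(string.ascii_lowercase + string.ascii_uppercase,
                          key + key.upper())
    return text.translate(table)

enc = substitute("Colorless green ideas")
print(enc)  # Yfcftcqxx vtqqo mrqpx

# Arbitrary at the letter level, yet much structure survives:
assert enc[0].isupper()                           # capitalization preserved
assert len(enc) == len("Colorless green ideas")   # message length preserved
assert enc.count(" ") == 2                        # word boundaries preserved
assert enc[7] == enc[8]                           # the doubled "ss" stays doubled
```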
<br />
Even word-based code books tend to preserve sequential order. That is, the message is encoded word by word from the beginning of the message to the end. But more sophisticated methods are possible, for example by padding the message with irrelevant words. It's less common to see the letters of the individual words scrambled, but we could do that by choosing a random permutation for each word length: say words of length 2 are reversed (permutation 21), so that "to" would be encoded as "to" ⇒ "ot" ⇒ "fj", words of length three are scrambled as 312, length four as 2431, and so on. Adding this encryption technique will break apart some doubled letters. But the word order would still be preserved across the encryption.<br />
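That letter-scrambling layer can be sketched too (the permutation table below simply hard-codes the examples in the text; a real scheme would draw a random permutation for each word length):<br />

```python
import string

# 1-based source positions per word length, as in the text:
# length 2 -> "21" (reversed), length 3 -> "312", length 4 -> "2431".
PERMS = {2: (2, 1), 3: (3, 1, 2), 4: (2, 4, 3, 1)}

# The scrambled-alphabet key from example 2, applied after scrambling.
SUB = str.maketrans(string.ascii_lowercase, "payrquvkmzbclofsitxjnehdgw")

def scramble(word):
    """Permute a word's letters according to its length's stored permutation."""
    perm = PERMS.get(len(word))
    if perm is None:
        return word  # lengths without a stored permutation pass through
    return "".join(word[i - 1] for i in perm)

# "to" => "ot" by scrambling, then "fj" under the substitution cipher.
assert scramble("to") == "ot"
assert scramble("to").translate(SUB) == "fj"
```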
<br />
These toy examples are just to show that "arbitrary" vs "systematic" isn't an all-or-nothing thing in a mapping. You have to consider all sorts of properties of the input and output representations and see which properties are being systematically preserved (or approximately preserved) across the mapping, and which are not. Temporal relations (like sequencing) are particularly important in this respect.<br />
<div>
<br /></div>
Bill Idsardihttp://www.blogger.com/profile/10570926308058368183noreply@blogger.com19tag:blogger.com,1999:blog-5275657281509261156.post-916315694486145022019-04-06T15:51:00.000-07:002019-04-06T15:51:02.689-07:00ℝeℤolutionChabot 2019:<br />
<br />
"The notion that phonetic realizations of phonological objects function in an arbitrary fashion is counterintuitive at best, confounding at worst. However, order is restored to both phonology and phonetics if a modular theory of mind (Fodor 1983) is considered. In a modular framework, cognition is viewed as work carried out by a series of modules, each of which uses its own vocabulary and transmits inputs and outputs to other modules via interfaces known as transducers (Pylyshyn 1984; Reiss 2007), and the relationship between phonetics and phonology <i>must</i> be arbitrary. <b>This formalizes the intuition that phonology deals in the discrete while phonetics deals in the continuous.</b> A phonological object is an abstract cognitive unit composed of features or elements, with a phonetic realization that is a physical manifestation of that object located in time and space, which is composed of articulatory and perceptual cues." [italics in original, boldface added here]<br />
<br />
The implication seems to be that any relation between discrete and continuous systems is "arbitrary". However, there are non-arbitrary mappings between discrete and continuous systems. The best known is almost certainly the relationship between the integers (ℤ) and the reals (ℝ). There is exactly one homomorphism from ℤ into ℝ that preserves the ring structure (addition, multiplication, 0 and 1), and it is the obvious embedding. Call this H. That is, H maps {... -1, 0, 1 ...} in ℤ to {... -1.0, 0.0, 1.0 ...} in ℝ (using the C conventions for ints and floats). Using + for addition over ℤ and +. for addition over ℝ, H also takes + to +. (that is, we need to say what the group operation is in each case; this is important when thinking about symmetry groups, for example). So it is true that for all i, j in ℤ, H(i + j) = H(i) +. H(j).<br />
<br />
However, mapping from ℝ onto ℤ (quantization, Q) is a much trickier business. One obvious technique is to map the elements of ℝ to the <a href="https://en.wikipedia.org/wiki/Nearest_integer_function">nearest integer</a> (i.e. to round them off). But this is not a homomorphism, because for some r, s in ℝ, Q(r +. s) ≠ Q(r) + Q(s); for example, Q(1.6 +. 1.6) = Q(3.2) = 3, but Q(1.6) + Q(1.6) = 2 + 2 = 4. So the preservation of addition from ℝ to ℤ is only partial.<br />
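This asymmetry is easy to check mechanically, with Python ints and floats standing in for ℤ and ℝ (a toy sketch; note that Python's round breaks .5 ties to even, which does not affect this example):<br />

```python
def H(i):
    """The obvious embedding Z -> R (here, int -> float)."""
    return float(i)

def Q(r):
    """Quantization R -> Z by rounding to the nearest integer."""
    return round(r)

# H preserves addition everywhere (checked on a small range):
assert all(H(i + j) == H(i) + H(j) for i in range(-5, 6) for j in range(-5, 6))

# Q preserves addition only partially:
assert Q(1.2 + 1.2) == Q(1.2) + Q(1.2)   # 2 == 2: preserved here
assert Q(1.6 + 1.6) != Q(1.6) + Q(1.6)   # 3 != 4: the counterexample above
```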
<br />
<b>References</b><br />
<br />
Chabot A 2019. What’s wrong with being a rhotic? <i>Glossa</i>, 4(1), 38. DOI: http://doi.org/10.5334/gjgl.618<br />
<br />Bill Idsardihttp://www.blogger.com/profile/10570926308058368183noreply@blogger.com18tag:blogger.com,1999:blog-5275657281509261156.post-28688773018776498002019-04-04T08:00:00.001-07:002019-04-04T10:54:27.169-07:00Felinology(In memory of Felix d. 1976, Monty d. 1988, Jazz d. 1993)<br />
<br />
In <a href="https://www.nature.com/articles/d41586-019-01067-z">Nature</a> today, some evidence that cats can distinguish their own names. The cats were tested in their homes using a habituation-dishabituation method. This is in contrast to dogs, who have been tested using retrieval tasks, because "the training of cats to perform on command would require a lot of effort and time." From a quick scan of the article, it isn't clear if the foils for the names were minimal pairs though.Bill Idsardihttp://www.blogger.com/profile/10570926308058368183noreply@blogger.com5tag:blogger.com,1999:blog-5275657281509261156.post-28815545356422512482019-04-04T07:21:00.000-07:002019-04-04T07:23:01.649-07:00Call for papers: Melodic primes in phonologyFrom Alex and Tobias:<br />
<br />
<hr />
Special issue of <i>Canadian Journal of Linguistics/Revue canadienne de linguistique</i><br />
Call for papers<br />
<br />
We are calling for high-quality papers addressing the status of <b>melodic primes in phonology</b>, in particular in substance-free phonology frameworks. That is, do phonological primes bear phonetic information, and if so, how much and in which guise exactly? How are melodic primes turned into phonetic objects? In the work of Hale & Reiss, who coined the term substance-free phonology, it is only phonological computation that is unimpacted by phonetic substance; substance is nevertheless present in the phonology: melodic primes are still phonetic in nature, and their phonetic content determines how they will be realized as phonetic objects. We are interested in arguments for the presence of phonetic information in melodic primes, as well as for an alternative position which sees melodic primes as entirely void of phonetic substance.<br />
<br />
At the recent Phonological Theory Agora in Nice, there was some discussion regarding the implications a theory of substance-free melodic primes has for phonology; a variety of frameworks – including Optimality Theory, Government Phonology, and rule-based approaches – have all served as frameworks for theories which see melodic primes as entirely divorced from phonetic information. The special issue seeks to highlight some of those approaches, and is intended to spark discussion between advocates of the various positions and between practitioners of different frameworks.<br />
<br />
We are especially interested in the implications a theory of substance-free primes has for research in a number of areas central to phonological theory, including: phonological representations, the acquisition of phonological categories, the form of phonological computation, the place of marginal phenomena such as “crazy rules” in phonology, the meaning of markedness, the phonology of signed languages, the nature of the phonetics/phonology interface, and more. Substance-free primes also raise big questions related to emergence: are melodic primes innate, or do they emerge through usage? How are phonological patterns acquired if primes are not innate?<br />
<br />
As a first step, contributors are asked to submit a <b>two page abstract</b> to the editor at alexander.chabot@univ-cotedazur.fr<br />
<br />
Contributions will be evaluated based on relevance for the special issue topic, as well as the overall quality and contribution to the field. Contributors of accepted abstracts will be invited to submit a full paper, which will undergo the standard peer review process. Contributions that do not fulfill the criteria for this special issue can, naturally, still be submitted to the <i>Canadian Journal of Linguistics/Revue canadienne de linguistique.</i><br />
<br />
Timeline:<br />
(a) June 1, 2019: deadline for abstracts, authors notified by July<br />
(b) December 2019: deadline for first submission<br />
(c) January 2020: sending out of manuscripts for review<br />
(d) March 2020: completion of the first round of peer review<br />
(e) June 2020: deadline for revised manuscripts<br />
(f) August 2020: target date for final decision on revised manuscripts<br />
(g) October 2020: target date for submission of copy-edited manuscripts<br />
(h) CJL/RCL copy-editing of papers<br />
(i) End of 2020: Submission of copy-edited papers to Cambridge University Press (4 months before publication date).Bill Idsardihttp://www.blogger.com/profile/10570926308058368183noreply@blogger.com2tag:blogger.com,1999:blog-5275657281509261156.post-27122097428636587882019-04-03T08:07:00.000-07:002019-04-03T08:07:07.896-07:00Dueling Fodor interpretationsBill Idsardi<br />
<br />
Alex and Tobias from their <a href="http://facultyoflanguage.blogspot.com/2019/03/a-new-player-has-entered-game.html">post</a>:<br />
<br />
"The ground rule of (Fodorian) modularity is domain specificity: computational systems can only parse and compute units that belong to a proprietary vocabulary that is specific to the system at hand."<br />
<br />
and<br />
<br />
"Hence Hale & Reiss' statement that nothing can be parsed by the cognitive system that wasn't present at birth (or that the cognitive system does not already know) appears to be just incorrect. Saying that unknown stimulus can lead to cognitive categories everywhere except in phonology seems a position that is hard to defend."<br />
<br />
I think both parties here are invoking Fodor, but with different emphases. Alex and Tobias are cleaving reasonably close to Fodor 1983 while Charles and Mark are continuing some points from Fodor 1980, 1998.<br />
<br />
But Fodor is a little more circumspect than Alex and Tobias about intermodular information transfer:<br />
<br />
Fodor 1983:46f: "<i>the input systems are modules ... </i>I imagine that within (and, quite possibly, across)[fn13]<i> </i>the traditional modes, there are highly specialized computational mechanisms in the business of generating hypotheses about the distal sources of proximal stimulations. The specialization of these mechanisms consists in constraints either on the range of information they can access in the course of projecting such hypotheses, or in the range of distal properties they can project such hypotheses about, or, most usually, on both."<br />
<br />
"[fn13] The "McGurk effect" provides fairly clear evidence for cross-modal linkages in at least one input system for the modularity of which there is independent evidence. McGurk has demonstrated that what are, to all intents and purposes, hallucinatory speech sounds can be induced when the subject is presented with a visual display of a speaker making vocal gestures appropriate to the production of those sounds. The suggestion is that (within, presumably, narrowly defined limits) mechanisms of phonetic analysis can be activated by -- and can apply to -- <i>either </i>acoustic <i>or</i> visual stimuli. It is of central importance to realize that the McGurk effect -- though cross-modal -- is itself domain specific -- viz., specific to language. A motion picture of a bouncing ball does not induce bump, bump, bump hallucinations. (I am indebted to Professor Alvin Liberman both for bringing McGurk's results to my attention and for his illuminating comments on their implications.)" [italics in original]<br />
<br />
I think this quote deserves a slight qualification, as there is now quite a bit of evidence for multisensory integration in the superior temporal sulcus (e.g. Noesselt et al 2012). As for "bump, bump, bump", silent movies of people speaking don't induce McGurk effects either. The cross-modal effect is broader than Fodor thought too, as non-speech visual oscillations that occur in phase with auditory oscillations do enhance brain responses in auditory cortex (Jenkins et al 2011).<br />
<br />
To restate my own view again, to the extent that the proximal is <b>partially veridical</b> with the distal, such computational mechanisms are substantive (both the elements and the relations between elements). The best versions of such computational mechanisms attempt to <b>minimize</b> <b>both</b> substance (the functions operate over a minimum number of variables about distal sources; they provide a compact encoding) and arbitrariness (the "dictionary" is as small as possible; it contains just the smallest fragments that can serve as a basis for the whole function; the encoding is compositional and minimizes discontinuities).<br />
<br />
And here's Fodor on the impossibility of inventing concepts:<br />
<br />
Fodor 1980:148: "Suppose we have a hypothetical organism for which, at the first stage, the form of logic instantiated is propositional logic. Suppose that at stage 2 the form of logic instantiated is first-order quantificational logic. ... Now we are going to try to get from stage 1 to stage 2 by a process of learning, that is, by a process of hypothesis formation and confirmation. Patently, it can't be done. Why? ... [Because] such a hypothesis can't be formulated with the conceptual apparatus available at stage 1; that is precisely the respect in which propositional logic is weaker than quantificational logic."<br />
<br />
Fodor 1980:151: "... there is no such thing as a concept being invented ... <i>It is not a theory of how you acquire concepts, but a theory of how the environment determines which parts of the conceptual mechanism in principle available to you are in fact exploited.</i>" [italics in original]<br />
<br />
You can select or activate a latent ability on the basis of evidence and criteria (the first order analysis might be much more succinct than the propositional analysis) but you can't build first order logic solely out of the resources of propositional logic. You have to have first order logic already available to you in order for you to choose it.<br />
<br />
<b>References</b><br />
<br />
Fodor JA 1980. On the impossibility of acquiring "more powerful" structures. Fixation of belief and concept acquisition. In M Piattelli-Palmarini (ed.) <i>Language and Learning: The Debate between Jean Piaget and Noam Chomsky.</i> Harvard University Press. 142-162.<br />
<br />
Fodor JA 1983. <i>Modularity of Mind.</i> MIT Press.<br />
<br />
Fodor JA 1998. <i>Concepts: Where Cognitive Science went Wrong.</i> Oxford University Press.<br />
<br />
Jenkins J, Rhone AE, Idsardi WJ, Simon JZ, Poeppel D 2011. The elicitation of audiovisual steady-state responses: multi-sensory signal congruity and phase effects. <i>Brain Topography</i>, 24(2), 134–148.<br />
<br />
Noesselt T, Bergmann D, Heinze H-J, Münte T, Spence C 2012. Coding of multisensory temporal patterns in human superior temporal sulcus. <i>Frontiers in Integrative Neuroscience</i>, 6, 64.<br />
<br />Bill Idsardihttp://www.blogger.com/profile/10570926308058368183noreply@blogger.com11tag:blogger.com,1999:blog-5275657281509261156.post-72395558608184733682019-03-31T08:33:00.000-07:002019-03-31T08:33:33.642-07:00Seeing with your tongue<i>EM: You refuse to look through my telescope.</i><br />
<i>TK (gravely): It's not a telescope, Errol. It's a kaleidoscope.</i><br />
Exchange recounted in <i>The Ashtray </i>by Errol Morris (p12f)<br />
<br />
Alex and Tobias in their <a href="http://facultyoflanguage.blogspot.com/2019/03/a-new-player-has-entered-game.html">post</a>:<br />
<br />
"At a more general cognitive level, we know positively that the human brain/mind is perfectly able to make sense of sensory input that was never encountered and for sure is not innate. Making sense here means "transform a sensory input into cognitive categories". There are multiple examples of how electric impulses have been learned to be interpreted as either auditive or visual perception: cochlear implants on the one hand, so-called artificial vision, or bionic eye on the other hand. The same goes for production: mind-controlled prostheses are real."<br />
<br />
The nervous system can certainly "transform a sensory input into cognitive categories"; the question is how wild these transformations (transductions, interfaces) can be. No surprise, I'm going to say that they are highly constrained and therefore not fully arbitrary, basically limited to quasimorphisms. In the case of (visual) geometry, I think that we can go further and say that they are constrained to affine transformations and radial basis functions.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7-E2fNr2GONqNyWIevc7ZoqihSx856dcRcRp9lYOrqR6DfyjqCiH2fe85WbK878lbvZL_dAWKR12WAQMnjssESEGTOkvjbSjE0IEIrZS6x2nolPeDsAoz2LLesuFH1RebKG8yHSCRALHe/s1600/brainport.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="742" data-original-width="710" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7-E2fNr2GONqNyWIevc7ZoqihSx856dcRcRp9lYOrqR6DfyjqCiH2fe85WbK878lbvZL_dAWKR12WAQMnjssESEGTOkvjbSjE0IEIrZS6x2nolPeDsAoz2LLesuFH1RebKG8yHSCRALHe/s320/brainport.jpg" width="306" /></a></div>
<br />
<br />
One of the better known examples of a neuromorphic sensory prosthetic device is the Brainport, an electrode array which sits on the surface of the tongue ( <a href="https://www.youtube.com/watch?v=48evjcN73rw">https://www.youtube.com/watch?v=48evjcN73rw</a> ). The Brainport is a 2D sensor array, and so there is a highly constrained geometric relationship between the tongue-otopic coordinate system of the Brainport and the retinotopic one. As noted in the Wikipedia <a href="https://en.wikipedia.org/wiki/Sensory_substitution">article</a>, "This and all types of sensory substitution are only possible due to neuroplasticity." But neuroplasticity is not total, as shown by the homotopically limited range of language reorganization (Tivarus et al 2012).<br />
<br />
So the thought experiment here consists of thinking about stranger tongue-otopic arrangements and whether they would work in a future Brainport V200 device.<br />
<br />
1. Make a "funhouse" version of the Brainport. Flip the vertical and/or horizontal dimensions. This would be like wearing prismatic glasses. Reflections are affine transformations. This will work.<br />
<br />
2. Make a color version of the Brainport. Provide three separate sensor arrays, one for each of the red, green and blue wavelengths. In the retina the different cone types for each "pixel" are intermixed (spatially proximate); in the color Brainport they wouldn't be. We would effectively be trying to use an analog of stereo vision computation (but with 3 "eyes") to do color registration and combination. It's not clear whether this would work.<br />
<br />
3. Make a "kaleidoscope" version of the Brainport. Randomly connect the camera pixel array with the sensor array, such that adjacent pixels are no longer guaranteed to be adjacent on the sensor array. The only way to recover the adjacency information is via a (learned) lookup table. This is beyond the scope of affine transformations and radial basis functions. This will not work.<br />
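The contrast between thought experiments 1 and 3 can be made concrete by checking how much pixel adjacency a remapping preserves (a toy sketch; the grid size and the Manhattan-adjacency test are my assumptions, not a model of the actual device):<br />

```python
import random

N = 8  # toy N x N sensor array
cells = [(r, c) for r in range(N) for c in range(N)]

def adjacent(a, b):
    """Two cells are adjacent if they differ by one step in one dimension."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def fraction_preserved(mapping):
    """Of all adjacent pixel pairs, how many stay adjacent after remapping?"""
    pairs = [(a, b) for a in cells for b in cells if adjacent(a, b)]
    kept = sum(adjacent(mapping[a], mapping[b]) for a, b in pairs)
    return kept / len(pairs)

# 1. "Funhouse" horizontal flip (an affine map): adjacency fully preserved.
flip = {(r, c): (r, N - 1 - c) for r, c in cells}
assert fraction_preserved(flip) == 1.0

# 3. "Kaleidoscope" scramble: adjacency almost entirely destroyed.
shuffled = cells[:]
random.seed(0)               # seeded only to make the sketch deterministic
random.shuffle(shuffled)
scrambled = dict(zip(cells, shuffled))
assert fraction_preserved(scrambled) < 0.2
```

The scrambled map is still learnable in principle, but only via something like a lookup table, since no compact geometric transformation will recover the lost adjacency.<br />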
<br />
<b>References</b><br />
<br />
Liu S-C, Delbruck T 2010. Neuromorphic sensory systems. <i>Current Opinion in Neurobiology</i>, 20(3), 288–295.<br />
<br />
Tivarus ME, Starling SJ, Newport EL, Langfitt JT 2012. Homotopic language reorganization in the right hemisphere after early left hemisphere injury. <i>Brain and Language</i>, 123(1), 1–10.<br />
<br />Bill Idsardihttp://www.blogger.com/profile/10570926308058368183noreply@blogger.com4tag:blogger.com,1999:blog-5275657281509261156.post-3933025654774776082019-03-29T08:00:00.000-07:002019-03-29T08:00:32.578-07:00More on "arbitrary"Bill Idsardi<br />
<br />
Alex and Tobias have upped the ante, raised the stakes and doubled down in the substance debate, advocating a "radical substance-free" position in their post.<br />
<br />
I had been pondering another post on this topic myself since reading Omer's comment on his <a href="https://omer.lingsite.org/blogpost-adventures-in-modularity-phonological-optimization-edition/">blog</a> "a parallel, bi-directional architecture is literally the weakest possible architectural assumption". So I guess Alex and Tobias are calling my bluff, and I need to show my cards (again).<br />
<br />
So I agree that "substance abuse" is bad, and I agree that minimization of substantive relationships is a good research tactic, but "substance-free" is at best a misnomer, like this "100% chemical free hair dye" which shoppers assume isn't just an empty box. A theory lacking any substantive connection with the outside world would be a theory about nothing.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2gNboGyYaU_w0N6wfOajXpRnrRY2rMQeLXsKoxbN_Lc9fHzN0CcgOeT6fT4G5-5c5m3oN2MNXz3jXrCqUD_XtHO61Q8fYVLpMlbjWkmm5aLnQpGCeEiULKEcXmIbQrOCzKOsYASlITE4r/s1600/hairdye.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="679" data-original-width="411" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2gNboGyYaU_w0N6wfOajXpRnrRY2rMQeLXsKoxbN_Lc9fHzN0CcgOeT6fT4G5-5c5m3oN2MNXz3jXrCqUD_XtHO61Q8fYVLpMlbjWkmm5aLnQpGCeEiULKEcXmIbQrOCzKOsYASlITE4r/s320/hairdye.jpg" width="193" /></a></div>
<br />
And there's more to the question of "substance" than just entities, there are also predicates and relations over those entities. If phonology is a mental model for speech then it must have a structure and an interpretation, and the degree of veridicality in the interpretation of the entities, predicates and relations is the degree to which the model is substantive. Some truths about the entities, predicates and relations in the outside world will be reflected in the model, that's its substance. The computation inside the model may be <i>encapsulated</i>, disconnected from events in the world, without an interesting feedback loop (allowing, say, for simulations and predictions about the world) but that's a separate concept.<br />
<br />
As in the case discussed by Omer, a lot of the debate about "substance" seems to rest on architectural and interface assumptions (with the phonology-phonetics-motor-sensory interfaces often termed "transducers" with nods to sensory transducers, see Fain 2003 for an introduction). The position taken by substance-free advocates is that the mappings achieved by these interfaces/transducers (even stronger, <b>all</b> interfaces) are arbitrary, with the canonical example being a look-up table, as exhibited by the lexicon. For example, from Scheer 2018:<br />
<br />
“Since lexical properties by definition do not follow from anything (at least synchronically speaking), the relationship between the input and the output of this spell-out is <i>arbitrary</i>: there is no reason why, say, -<i>ed</i>, rather than -<i>s</i>, -<i>et</i> or -<i>a</i> realizes past tense in English.<br />
The arbitrariness of the categories that are related by the translational process is thus a <b>necessary</b> property of this process: it follows from the fact that vocabulary items on either side cannot be parsed or understood on the other side. By definition, the natural locus of arbitrariness is the lexicon: therefore spell-out goes through a lexical access.<br />
If grammar is modular in kind then <b>all intermodular relationships must instantiate the same architectural properties</b>. That is, what is true and undisputed for the upper interface of phonology (with morpho-syntax) must also characterize its lower interface (with phonetics): there must be a spell-out operation whose input (phonological categories) entertain <b>an arbitrary relationship</b> with its output (phonetic categories).” [italics in original, boldface added here]<br />
<br />
Channeling Omer then, spell-out via lookup table is literally the weakest possible architectural assumption about transduction. A lookup table is the position of last resort, not the canonical example. Here's Gallistel and King (2009: xi) on this point:<br />
<br />
“By contrast, a compact procedure is a composition of functions that is guaranteed to <i>generate</i> (rather than <i>retrieve</i>, as in table look-up) the symbol for the value of an <i>n</i>-argument function, for any arguments in the domain of the function. The distinction between a look-up table and a compact generative procedure is critical for students of the functional architecture of the brain.”<br />
<br />
I think it may confuse some readers that Gallistel and King talk quite a bit about lookup tables, but they do say "many functions can be implemented with simple machines that are incomparably more efficient than machines with the architecture of a lookup table" (p. 53).<br />
<br />
Jackendoff 1997:107f (who advocates a parallel, bi-directional architecture of the language faculty by the way) struggles to find analogs to the lexicon:<br />
<br />
"One of the hallmarks of language, of course, is the celebrated "arbitrariness of the sign," the fact that a random sequence of phonemes can refer to almost anything. This implies, of course, that there could not be language without a lexicon, a list of the arbitrary matches between sound and meaning (with syntactic properties thrown in for good measure).<br />
If we look at the rest of the brain, we do not immediately find anything with these same general properties. Thus the lexicon seems like a major evolutionary innovation, coming as if out of nowhere."<br />
<br />
Jackendoff then goes on to offer some possible examples of lexicon-like associations: vision with taste ("mashed potatoes and French vanilla ice cream don't look that different") and skilled motor movements like playing a violin or <b>speaking</b> ("again it's not arbitrary, but processing is speeded up by having preassembled units as shortcuts.") But his conclusion ("a collection of stored associations among fragments of disparate representations") is that overall "it is not an arbitrary mapping".<br />
<br />
As I have said before, in my opinion a mapping has substance to the extent that it has <b>partial veridicality</b>. (Max in the comments to the original <a href="http://facultyoflanguage.blogspot.com/2018/04/arbitrary-or-dogs-and-cats.html">post</a> prefers "motivated" to what I called "non-arbitrary" but see <a href="http://www-personal.umich.edu/~rburling/SantaFe.html">Burling</a> who draws a direct opposition between "motivated" and "arbitrary".)<br />
<br />
So I have two points to re-emphasize about partial veridicality: it's partial and it displays some veridicality.<br />
<br />
<b>Partially, not completely, veridical</b><br />
<br />
This is the easy part, and the linguists all get this one. (But it was a continuing source of difficulty for some neuro people in my grad cogsci course over the years.) The sensory systems of animals are limited in dynamic range and in many other ways. The whole concept of a “just noticeable difference” means that there are physical differences that are below the threshold of sensory detection. The fact that red is next to violet on the color wheel is also an example of a non-veridical aspect of color perception.<br />
<br />
These are relatively easy because they are a bit like existence proofs. We just need to find some aspect of the system that breaks a relationship at a single point across the interface. Using T to represent transduction, we need to find a relation R such that R(x,y) holds but TR(Tx,Ty) does not hold everywhere or vice versa. In the color wheel example the "external" relation is wavelength distance, and the "internal" relation is perceptual hue similarity; violet is perceptually similar to red even though the wavelength of violet is maximally distant from red in the visible spectrum. (But otherwise wavelength distance is a good predictor of perceptual similarity.) And this same argument extends to intermodular relationships within the visual system, as in the mapping between the RGB hue representation in the retina and the R/G-Y/B opponent process representation in the lateral geniculate nucleus.<br />
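The existence-proof flavor of this argument can be sketched in a few lines of code. The numbers below are illustrative stand-ins, not psychophysical data: wavelength distance (the external relation R) mostly tracks distance on the perceptual hue circle (the transduced relation), but the mapping breaks at the red/violet ends.

```python
# Toy sketch of partial veridicality (hue angles and wavelengths are
# rough illustrative values, not measured data).

# Approximate hue-circle angles (degrees) and wavelengths (nm).
hue_angle  = {"red": 0, "yellow": 60, "green": 120, "blue": 240, "violet": 300}
wavelength = {"red": 700, "yellow": 580, "green": 530, "blue": 470, "violet": 400}

def wl_dist(a, b):
    """The external relation R: distance in wavelength."""
    return abs(wavelength[a] - wavelength[b])

def hue_dist(a, b):
    """The transduced relation: distance on the perceptual hue circle."""
    d = abs(hue_angle[a] - hue_angle[b]) % 360
    return min(d, 360 - d)

# Mostly, greater wavelength distance means greater perceptual distance...
assert wl_dist("red", "yellow") < wl_dist("red", "green")
assert hue_dist("red", "yellow") < hue_dist("red", "green")

# ...but red and violet are maximally far apart in wavelength yet
# perceptually close: the relation breaks at this single point.
assert wl_dist("red", "violet") > wl_dist("red", "blue")
assert hue_dist("red", "violet") < hue_dist("red", "blue")
```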
<div>
<br /></div>
<div>
<div>
<b>Partially, not completely, arbitrary</b></div>
</div>
<div>
<b><br /></b></div>
<div>
<div>
<i>I am never forget the day</i></div>
<div>
<i>I am given first original paper to write</i></div>
<div>
<i>It was on analytical algebraic topology</i></div>
<div>
<i>Of locally Euclidean metrization</i></div>
<div>
<i>Of infinitely differentiable Riemannian manifold</i></div>
<div>
<i>Боже мой!</i></div>
<div>
<i>This I know from nothing</i></div>
</div>
<div>
Tom Lehrer, "Lobachevsky"</div>
<div>
<br /></div>
<div>
This is somewhat harder to think about because one has to imagine really crazy functions (i.e. arbitrary functions in the mathematical sense, full lookup-table functions). To put my cards on the table, I don't believe sensory transducers are capable of computing arbitrary functions (the place to look for this would be the olfactory system). I think they are limited to quasimorphisms, capable of making some changes in topology (e.g. line to circle in color vision), but the functions are almost everywhere differentiable, offering a connection with manifold learning (Jansen & Niyogi 2006, 2013). I think Gallistel and King (2009: x) have pretty much the same view (though I think "homomorphism" is slightly too strong):</div>
<div>
<br /></div>
<div>
“Representations are functioning homomorphisms. They require structure-preserving mappings (homomorphisms) from states of the world (the represented system) to symbols in the brain (the representing system). These mappings preserve aspects of the formal structure of the world.” </div>
<div>
<br /></div>
<div>
So here's another bumper sticker slogan: preserved structure is substance. </div>
<div>
<br /></div>
<div>
It's <i>homo</i>morphic not <i>iso</i>morphic so the structure is not completely preserved (it's only partially veridical). But it doesn't throw out all the structure, which includes not just entities but also relationships among entities.</div>
<div>
<br /></div>
<div>
A small example of this sort can be found in Heffner et al 2019. Participants were asked to learn new categories, mappings between sounds and colors, with the sounds drawn from a fricative continuum between [x] and [ç] (1-10), and the associated colors drawn from the various conditions shown in the figure.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVsBhkhgm9iCd61NmkWrPXt_-n0zL-nI5R7Qudi1ZOUaqYdgj8esiofRspM8jIbfj0wjmf6HXXRk19OuahfHgqlMbp4eT_j5NfCXRqEgTtmXATahlmisVvlmGqW2hP1lWkY45j8TMU1H3m/s1600/Heffner1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="264" data-original-width="459" height="184" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVsBhkhgm9iCd61NmkWrPXt_-n0zL-nI5R7Qudi1ZOUaqYdgj8esiofRspM8jIbfj0wjmf6HXXRk19OuahfHgqlMbp4eT_j5NfCXRqEgTtmXATahlmisVvlmGqW2hP1lWkY45j8TMU1H3m/s320/Heffner1.png" width="320" /></a></div>
<div>
<br /></div>
<div>
I don't think it should come as much of a surprise that "picket fence" and "odd one out" are pretty hard for people to learn. So the point here is that there is structure in the learning mechanism; mappings with fewer discontinuities are preferred.</div>
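The notion "fewer discontinuities" is easy to operationalize: count the category changes between adjacent steps on the continuum. The condition labels below are mine, loosely modeled on the designs in the figure.

```python
# A hedged sketch of 'mappings with fewer discontinuities are preferred'
# (condition labels are my reconstruction, not Heffner et al.'s data).

def discontinuities(mapping):
    """Count category changes between adjacent steps on the continuum."""
    return sum(1 for a, b in zip(mapping, mapping[1:]) if a != b)

# Ten steps on the [x]-[ç] fricative continuum, mapped to two colors.
two_block    = ["red"] * 5 + ["blue"] * 5            # easy to learn
odd_one_out  = ["red"] * 5 + ["blue"] * 4 + ["red"]  # harder
picket_fence = ["red", "blue"] * 5                   # hardest

assert discontinuities(two_block) == 1
assert discontinuities(odd_one_out) == 2
assert discontinuities(picket_fence) == 9
```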
<div>
<br /></div>
<div>
Here's a similar finding from gerbils (Ohl 2001, 2009):</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYjhsNMSGlrqLlcri21us3Q9MsyqpAgXW7zzNXh-h1Xt_wq-pkJAmZOYrAVPi43Wwpzp-h97knG583DpVCHiSOy4uDupp9VwbRR3zxx-K6PK0mCjXiPoEDUkH4iNweSQLmRv6xamT3nrNb/s1600/image.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="263" data-original-width="363" height="231" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYjhsNMSGlrqLlcri21us3Q9MsyqpAgXW7zzNXh-h1Xt_wq-pkJAmZOYrAVPi43Wwpzp-h97knG583DpVCHiSOy4uDupp9VwbRR3zxx-K6PK0mCjXiPoEDUkH4iNweSQLmRv6xamT3nrNb/s320/image.png" width="320" /></a></div>
<div>
<br /></div>
<div>
Ohl et al 2009: "Animals trained on one or more training blocks never generalized to pure tones of any frequency (e.g. start or stop frequencies of the modulated tone, or frequencies traversed by the modulation or extrapolated from the modulation). This could be demonstrated by direct transfer experiments (Ohl et al 2001, supplementary material) or by measuring generalization gradients for modulation rate which never encompassed zero modulation rates (Ohl et al 2001)." [pure tones have a zero modulation rate -- WJI]</div>
<div>
<br /></div>
<div>
That is, the gerbils don't choose a picket fence interpretation either, although that would work here, based on the starting frequency of the tone. Instead, they find the function with the fewest discontinuities that characterizes the data, based on their genetic endowment of spectro-temporal receptive fields (STRFs) in their primary auditory cortex. They don't get to invent new STRFs, let alone create arbitrary ones. The genetic endowment provides the structure for the sensory transductions, and thus some functions are learnable while many are not. So the resulting functions are partially, but not completely, arbitrary. And they have a limited number of discontinuities.</div>
<div>
<br /></div>
<div>
By the way, exemplar (instance-based) learning models have no trouble with picket fence arrangements, learning them as quickly as they learn the other types.</div>
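To see why, consider a minimal exemplar model (my sketch, not a model from the papers cited): a one-nearest-neighbor learner just memorizes (stimulus, label) pairs, so the number of discontinuities in the mapping is simply irrelevant to it.

```python
# Sketch of why instance-based models don't mind picket fences: with the
# exemplars stored, classification is a nearest-neighbor lookup, and an
# alternating mapping is no harder than a two-block one.

def nearest_neighbor(exemplars, stimulus):
    """Classify by the label of the closest stored exemplar."""
    return min(exemplars, key=lambda e: abs(e[0] - stimulus))[1]

# The picket fence mapping over continuum steps 1-10.
picket_fence = [(i, "red" if i % 2 else "blue") for i in range(1, 11)]

# Every training item is classified perfectly -- no penalty at all for
# the alternating structure.
assert all(nearest_neighbor(picket_fence, i) == lab for i, lab in picket_fence)
```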
<div>
<br /></div>
<div>
OK, I think that's enough for now. I'll address my take on the relative priority of features and segments in another post.</div>
<div>
<b><br /></b></div>
<div>
<b>References</b></div>
<div>
<b><br /></b></div>
<div>
Fain GL 2003. <i>Sensory Transduction</i>. Sinauer.</div>
<div>
<br /></div>
<div>
Gallistel CR, King AP 2009. <i>Memory and the Computational Brain</i>. Wiley-Blackwell.</div>
<div>
<br /></div>
<div>
Heffner CC, Idsardi WJ, Newman RS 2019. Constraints on learning disjunctive, unidimensional auditory and phonetic categories. <i>Attention, Perception & Psychophysics</i>. https://doi.org/10.3758/s13414-019-01683-x</div>
<div>
<br /></div>
<div>
Jackendoff R 1997. <i>The Architecture of the Language Faculty</i>. MIT Press.<br />
<br />
Jansen A, Niyogi P 2006. Intrinsic Fourier analysis on the manifold of speech sounds. <i>IEEE ICASSP</i>. Retrieved from https://ieeexplore.ieee.org/abstract/document/1660002/<br />
<br />
Jansen A, Niyogi P 2013. Intrinsic Spectral Analysis. <i>IEEE Transactions on Signal Processing, </i>61(7), 1698–1710.</div>
<div>
<br /></div>
<div>
Ohl FW, Scheich H, Freeman WJ 2001. Change in pattern of ongoing cortical activity with auditory category learning. <i>Nature</i>, 412(6848), 733–736.</div>
<div>
<br /></div>
<div>
<div>
Ohl FW, Scheich H 2009. The role of neuronal populations in auditory cortex for category learning. In Holscher C, Munk M (Eds.) <i>Information Processing by Neuronal Populations.</i> Cambridge University Press. 224-246.</div>
<div>
<br /></div>
</div>
<div>
Scheer T 2018. The workings of phonology and its interfaces in a modular perspective. In Annual conference of the Phonological Society of Japan. phsj.jp. Retrieved from http://phsj.jp/PDF/abstract_Scheer_forum2018.pdf</div>
Bill Idsardi<br />
<br />
<hr />
<br />
<b>That 32GB flashdrive? It's 20,000 people worth of language acquisition information</b> (2019-03-27)<br />
<br />
Today in the <a href="https://royalsocietypublishing.org/doi/full/10.1098/rsos.181393">Royal Society Open Science</a>:<br />
<br />
Mollica F, Piantadosi ST. 2019. Humans store about 1.5 megabytes of information during language acquisition. R. Soc. open sci. 6: 181393. http://dx.doi.org/10.1098/rsos.181393<br />
<br />
(The title's arithmetic: 32 GB &divide; 1.5 MB per person &asymp; 21,000 people.)<br />
<br />
Bill Idsardi<br />
<br />
<hr />
<br />
<b>A new player has entered the game</b> (2019-03-26)<br />
<br />
Here is a guest post from Alex Chabot and Tobias Scheer picking up a <a href="http://facultyoflanguage.blogspot.com/2018/04/arbitrary-or-dogs-and-cats.html">thread</a> from about a year ago now. Bill<br />
<br />
<hr />
<br />
Alex Chabot & Tobias Scheer<br />
<b><br /></b>
<b>What it is that is substance-free: computation and/or melodic primes</b><br />
<br />
A late contribution to the debate...<br />
In his post from April 12th, 2018, Veno has clarified his take on the status of melodic primes (features) in phonology (which is identical to the one presented in the work of Hale & Reiss since 2000). The issue that gave rise to some misunderstanding and probably some misconception about the kind of primes that Hale-Reiss-Volenec propose concerns their substance-free status: which aspect of them is actually substance-free and which one is not? This is relevant because the entire approach initiated by Hale & Reiss' 2000 LI paper has come to be known as substance-free.<br />
<br />
Veno has thus made explicit that phonological features in his view are substance-laden, but that this substance does not bear on phonological computation. That is, phonological features bear phonetic labels ([labial], [coronal] etc.) in the phonology, but phonological computation ignores them and is able to turn any feature set into any other feature set in any context and its reverse. This is what may be called substance-free computation (i.e. computation that does not care for phonetics). At the same time, Veno explains, the phonetic information carried by the features in the phonology is used upon externalization (if we may borrow this word for phonological objects): it defines how features are pronounced (something called transduction by Hale-Reiss-Volenec, or phonetic implementation system PIS in Veno's post). That is, phonological [labial] makes sure that it comes out as something phonetically labial (rather than, say, dorsal). The correspondence between the phonological object and its phonetic exponent is thus firmly defined in the phonology - not by the PIS device.<br />
<br />
The reason why Hale & Reiss (2003, 2008: 28ff) have always held that phonological features are substance-laden is learnability: they contend that cognitive categories cannot be established if the cognitive system does not know beforehand what kind of sensory input will come its way and relates to the particular category ("let's play cards"). Hence labiality, coronality etc. would be unparsable noise for the L1 learner did they not know at birth what labiality, coronality etc. is. Therefore, Hale-Reiss-Volenec conclude, substance-laden phonological features are universal and innate.<br />
<br />
We believe that this take on melodic primes is misguided (we talk about melodic primes since features are the regular currency, but there are also approaches that entertain bigger, holistic primes, called Elements. Everything that is said about features also applies to Elements). The alternative to which we subscribe is called radical substance-free phonology, where "radical" marks the difference from Hale-Reiss-Volenec: in this view both phonological computation and phonological primes are substance-free. That is, phonology is really self-contained in the Saussurian sense: no phonetic information is present (as opposed to: present but ignored). Melodic primes are thus alphas, betas and gammas: they assure contrast and infra-segmental decomposition that is necessary independently. They are related to phonetic values by the exact same spell-out procedure that is known from the syntax-phonology interface: vocabulary X is translated into vocabulary Y through a lexical access (Scheer 2014). Hence α ↔ labiality (instead of [labial] ↔ labiality).<br />
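The spell-out idea can be rendered as a toy lookup (our gloss of the architecture, not an implementation from the literature): phonology computes over contentless symbols, and only a lexical spell-out table pairs them with phonetic correlates, just as vocabulary insertion pairs syntactic terminals with phonological exponents.

```python
# Toy rendering of radical substance-free spell-out (illustrative only):
# the primes carry no phonetic content; a lexical-access table relates
# them to phonetic values at externalization.

phonological_primes = {"alpha", "beta", "gamma"}  # substance-free symbols

# The spell-out lexicon: alpha <-> labiality, etc. (arbitrary pairings).
spell_out = {"alpha": "labiality", "beta": "coronality", "gamma": "dorsality"}

def externalize(segment):
    """Translate a set of substance-free primes into phonetic correlates."""
    return {spell_out[p] for p in segment}

assert externalize({"alpha", "beta"}) == {"labiality", "coronality"}
```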
<br />
<b>1. Formulations</b><br />
<br />
To start, the misunderstanding that Veno had the good idea to clarify was entertained by formulations like:<br />
<br />
"[w]e understand distinctive features here as a particular kind of substance-free units of mental representation, neither articulatory nor acoustic in themselves, but rather having articulatory and acoustic <i>correlates</i>." Reiss & Volenec (2018: 253, emphasis in original)<br />
<br />
Calling features substance-free when they are actually substance-laden is probably not a good idea. What is meant is that phonological computation is substance-free. But the quote talks about phonological units, not computation.<br />
<br />
<b>2. Incompatible with modularity</b><br />
<br />
The ground rule of (Fodorian) modularity is domain specificity: computational systems can only parse and compute units that belong to a proprietary vocabulary that is specific to the system at hand. In Hale-Reiss-Volenec's view, though, phonological units are defined by extra-phonological (phonetic) properties. Hence, given domain specificity, phonology is unable to parse phonetically defined units such as [labial], [coronal] etc. Or else, if "labial", "coronal" etc. are vocabulary items of the proprietary vocabulary used in phonological computation, this computation comprises both phonology and phonetics. Aside from the fact that there was enough blurring of these boundaries in the past two decades or so and that Hale-Reiss-Volenec have expressed themselves repeatedly in favour of a clear modular cut between phonetics and phonology, the architecture of their system defines phonology and phonetics as two separate systems since it has a translational device (transduction, PIS) between them.<br />
<br />
One concludes that phonological primes that are computed by phonological computation, but which bear phonetic labels (and in fact are not defined or differentiated by any other property), are a (modular) contradiction in terms.<br />
<br />
To illustrate that, see what the equivalent would be in another linguistic module, (morpho‑)syntax: what would you say about syntactic primes such as number, animacy, person etc. which come along as "coronal", "labial" etc. without making any reference to number, animacy, person? That is, syntactic primes that are not defined by syntactic but by extra-syntactic (phonological) vocabulary? In this approach it would then be said that even though primes are defined by non-syntactic properties, they are syntactic in kind and undergo syntactic computation, which however ignores their non-syntactic properties.<br />
<br />
This is but another way to state the common-sense question prompted by a system where the only properties that phonological primes have are phonetic, but which are then ignored by phonological computation: what are the phonetic labels good for? They do not do any labour in the phonology, and they need to be actively ignored. Hale-Reiss-Volenec's answer was mentioned above: they exist because of learnability. This is what we address in the following point.<br />
<br />
<b>3. Learnability</b><br />
<br />
Learnability concerns of substance-free melodic primes are addressed by Samuels (2012), Dresher (2018) and a number of contributions in Clements & Ridouane (2011). They are the focus of a recent <a href="https://ptanice.wordpress.com/program/">ms</a> by Odden (2019).<br />
<br />
At a more general cognitive level, we know positively that the human brain/mind is perfectly able to make sense of sensory input that was never encountered and for sure is not innate. Making sense here means "transforming a sensory input into cognitive categories". There are multiple examples of electric impulses coming to be interpreted as either auditory or visual percepts: cochlear implants on the one hand, so-called <a href="https://en.wikipedia.org/wiki/Visual_prosthesis">artificial vision, or the bionic eye</a>, on the other. The same goes for production: mind-controlled <a href="https://en.wikipedia.org/wiki/Prosthesis">prostheses</a> are real. Hence Hale & Reiss' statement that nothing can be parsed by the cognitive system that wasn't present at birth (or that the cognitive system does not already know) appears to be just incorrect. Saying that an unknown stimulus can lead to cognitive categories everywhere except in phonology seems a hard position to defend.<br />
<br />
<b>References</b><br />
<br />
Clements, George N. & Rachid Ridouane (eds.) 2011. <i>Where do Phonological Features come from? Cognitive, physical and developmental bases of distinctive speech categories</i>. Amsterdam: Benjamins.<br />
<br />
Dresher, Elan 2018. Contrastive Hierarchy Theory and the Nature of Features. <i>Proceedings of the 35th West Coast Conference on Formal Linguistics</i> 35: 18-29.<br />
<br />
Hale, Mark & Charles Reiss 2000. Substance Abuse and Dysfunctionalism: Current Trends in Phonology. <i>Linguistic Inquiry</i> 31: 157-169.<br />
<br />
Hale, Mark & Charles Reiss 2003. The Subset Principle in Phonology: Why the tabula can't be rasa. <i>Journal of Linguistics</i> 39: 219-244.<br />
<br />
Hale, Mark & Charles Reiss 2008. <i>The Phonological Enterprise.</i> Oxford: OUP.<br />
<br />
Odden, David 2019. Radical Substance Free Phonology and Feature Learning. Ms.<br />
<br />
Reiss, Charles & Veno Volenec 2018. Cognitive Phonetics: The Transduction of Distinctive Features at the Phonology–Phonetics Interface. <i>Biolinguistics</i> 11: 251-294.<br />
<br />
Samuels, Bridget 2012. The emergence of phonological forms. <i>Towards a biolinguistic understanding of grammar: Essays on interfaces</i>, edited by Anna Maria Di Sciullo, 193-213. Amsterdam: Benjamins.<br />
<br />
Bill Idsardi<br />
<br />
<hr />
<br />
<b>Alec Marantz on the goals and methods of Generative Grammar</b> (2019-03-13)<br />
<br />
I always like reading papers aimed at non-specialists by leading lights of a specialty. This includes areas that I have some competence in. I find that I learn a tremendous amount from such non-technical papers, for they self-consciously aim to identify the big ideas that make an inquiry worth pursuing in the first place and the general methods that allow it to advance. This is why I always counsel students not to skip Chomsky's "popular" books (e.g. <i>Language and Mind</i>, <i>Reflections on Language</i>, <i>Knowledge of Language</i>, etc.).<br />
<br />
Another nice (short) addition to this very useful literature is a paper by Alec Marantz (<a href="http://as.nyu.edu/content/dam/nyu-as/as/documents/What%20do%20linguists%20do_For_PDF.pdf">here</a>): <i>What do linguists do?</i> Aside from giving a nice overview of how linguists work, it also includes a quick and memorable comment on Everett's (mis)understanding of his critique of GG. What Alec observes is that even if one takes Everett's claims entirely at face value empirically (which one really shouldn't), his conclusion that Pirahã is different in kind from a language like English with respect to the generative procedures it deploys does not follow. Here is Alec:<br />
<div class="page" title="Page 12">
<div class="layoutArea">
<div class="column">
<blockquote class="tr_bq">
<span style="font-family: "timesnewromanpsmt"; font-size: 12pt;">His [Everett's, NH] analysis of Pirahã actually involves claiming Pirahã is just like every other language, except that it has a version of a mechanism that other languages use that, in Pirahã, limits the level of embedding of words within phrases.</span></blockquote>
I will let Alec explain the details, but the important point is that Everett confuses two very different issues that must be kept apart: what generative procedures a given G deploys, and what the products of those procedures are. Generative grammarians of the Chomsky stripe care a lot about the first question (what are the rule types that Gs can have?). What Alec observes (and what Everett actually concedes in his specific proposal) is that languages using the very same generative mechanisms can end up with very different products. Who would have thunk it!<br />
<br />
At any rate, take a look at Alec's excellent short piece. And while you are at it, you might want to read a short paper by another Syntax Master, Richie Kayne (<a href="http://as.nyu.edu/content/dam/nyu-as/asSilverDialogues/documents/Venice%202016%20Silver_R.%20Kayne.pdf">here</a>). He addresses a terrific question beloved by both neophytes and professionals: how many languages are there? I am pretty sure that his reply will both delight and provoke you. Enjoy.</div>
</div>
</div>
Norbert<br />
<br />
<hr />
<br />
<b>Dan Milway discusses Katz's semantic theory</b> (2019-03-12)<br />
<br />
Dan Milway has an interesting project: reading Jerrold Katz's semantic investigations and discussing them for/with/near us. Here are two urls that discuss the <a href="https://milway.ca/2019/02/16/katzs-semantic-theory-part-i/">preface</a> and <a href="https://milway.ca/2019/02/16/katzs-semantic-theory-part-i/">chapter 1</a> of Katz's 1972 <i>Semantic Theory</i>. Other posts are promised. I like these archeological digs into earlier thoughts on still murky matters. I suspect you will too.<br />
<br />
Norbert<br />
<br />
<hr />
<br />
<b>Omer on the autonomy of syntax; though you will be surprised what the autonomy is from!</b> (2019-03-12)<br />
<br />
<a href="https://omer.lingsite.org/blogpost-adventures-in-modularity-phonological-optimization-edition/">Here</a> is a post from Omer that bears on the autonomy issue. There are various conceptions of autonomy. The weakest is simply the claim that syntactic relations cannot be reduced to any others. The standard conception denies that they might reduce to semantic generalizations or probabilistic generalizations over strings (hence the utility of 'Colorless green ideas sleep furiously'). There are, however, stronger versions that relate to how different kinds of information intersect in derivations. And this is what Omer discusses: do the facts dictate that we allow phonological/semantic information to intersperse with syntactic information to get the empirical trains to run on time? Omer takes on a recent suggestion that this is required and, imo, shreds the conclusion.
At any rate, enjoy!<br />
<br />
Norbert<br />
<br />
<hr />
<br />
<b>More on non-academic jobs</b> (2019-03-06)<br />
<br />
Last week Norbert linked to a Nature article on non-academic careers. This week, Nature has another <a href="https://www.nature.com/articles/d41586-019-00747-0">piece</a> which offers very simple advice: talk to the people at the career center at your university. I did exactly this when I was finishing my PhD at MIT, and ended up interviewing for several non-academic research and development positions in industry.<br />
<br />
I should also say that my advisor, Morris Halle, told me that I should try being a professor first because in his opinion it was easier to go from an academic career to a non-academic one. I'm not sure that's really true, but I took his advice, and I'm still working as a professor so far.<br />
<br />
Bill Idsardi