
Tuesday, March 26, 2019

A new player has entered the game

Here is a guest post from Alex Chabot and Tobias Scheer picking up a thread from about a year ago now. Bill



Alex Chabot & Tobias Scheer

What it is that is substance-free: computation and/or melodic primes

A late contribution to the debate...
In his post of April 12th, 2018, Veno clarified his take on the status of melodic primes (features) in phonology (a take identical to the one set out in the work of Hale & Reiss since 2000). The issue that gave rise to some misunderstanding, and probably misconception, about the kind of primes that Hale-Reiss-Volenec propose concerns their substance-free status: which aspect of them is actually substance-free and which is not? This is relevant because the entire approach initiated by Hale & Reiss' 2000 LI paper has come to be known as substance-free.

Veno has thus made explicit that phonological features in his view are substance-laden, but that this substance does not bear on phonological computation. That is, phonological features bear phonetic labels ([labial], [coronal] etc.) in the phonology, but phonological computation ignores them and is able to turn any feature set into any other feature set in any context and its reverse. This is what may be called substance-free computation (i.e. computation that does not care about phonetics). At the same time, Veno explains, the phonetic information carried by the features in the phonology is used upon externalization (if we may borrow this word for phonological objects): it defines how features are pronounced (something called transduction by Hale-Reiss-Volenec, or the phonetic implementation system, PIS, in Veno's post). That is, phonological [labial] makes sure that it comes out as something phonetically labial (rather than, say, dorsal). The correspondence between the phonological object and its phonetic exponent is thus firmly defined in the phonology - not by the PIS device.

The reason why Hale & Reiss (2003, 2008: 28ff) have always held that phonological features are substance-laden is learnability: they contend that cognitive categories cannot be established if the cognitive system does not know beforehand what kind of sensory input will come its way and how that input relates to a particular category (the "let's play cards" argument). Hence labiality, coronality etc. would be unparsable noise for the L1 learner did they not know at birth what labiality, coronality etc. are. Therefore, Hale-Reiss-Volenec conclude, substance-laden phonological features are universal and innate.

We believe that this take on melodic primes is misguided. (We talk about melodic primes since features are the usual currency, but there are also approaches that entertain bigger, holistic primes, called Elements; everything that is said about features also applies to Elements.) The alternative to which we subscribe is called radical substance-free phonology, where "radical" marks the difference from Hale-Reiss-Volenec: in this view both phonological computation and phonological primes are substance-free. That is, phonology is really self-contained in the Saussurean sense: no phonetic information is present (as opposed to: present but ignored). Melodic primes are thus alphas, betas and gammas: they ensure contrast and the infra-segmental decomposition that is independently necessary. They are related to phonetic values by the exact same spell-out procedure that is known from the syntax-phonology interface: vocabulary X is translated into vocabulary Y through a lexical access (Scheer 2014). Hence α ↔ labiality (instead of [labial] ↔ labiality).
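
To make the intended architecture concrete, here is a minimal sketch of spell-out as lexical access (the prime names and exponents are invented for illustration, not taken from any of the works cited): the phonological prime is an arbitrary symbol, and its phonetic value comes from a stored table rather than from the prime's own content.

    # Minimal sketch (invented names): substance-free primes are paired with
    # phonetic exponents by a stored lexicon, not by their own content.
    SPELL_OUT = {
        "alpha": "labiality",
        "beta": "coronality",
        "gamma": "dorsality",
    }

    def externalize(prime):
        # Translation is a lexical access: vocabulary X -> vocabulary Y.
        return SPELL_OUT[prime]

    print(externalize("alpha"))  # -> labiality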

1. Formulations

To start with, the misunderstanding that Veno had the good idea to clarify was fostered by formulations like:

"[w]e understand distinctive features here as a particular kind of substance-free units of mental representation, neither articulatory nor acoustic in themselves, but rather having articulatory and acoustic correlates." Reiss & Volenec (2018: 253, emphasis in original)

Calling features substance-free when they are actually substance-laden is probably not a good idea. What is meant is that phonological computation is substance-free. But the quote talks about phonological units, not computation.

2. Incompatible with modularity

The ground rule of (Fodorian) modularity is domain specificity: computational systems can only parse and compute units that belong to a proprietary vocabulary specific to the system at hand. In Hale-Reiss-Volenec's view, though, phonological units are defined by extra-phonological (phonetic) properties. Hence, given domain specificity, phonology is unable to parse phonetically defined units such as [labial], [coronal] etc. Or else, if "labial", "coronal" etc. are vocabulary items of the proprietary vocabulary used in phonological computation, this computation comprises both phonology and phonetics. Aside from the fact that there has been enough blurring of these boundaries in the past two decades or so, and that Hale-Reiss-Volenec have expressed themselves repeatedly in favour of a clear modular cut between phonetics and phonology, the architecture of their system defines phonology and phonetics as two separate systems, since it has a translational device (transduction, PIS) between them.

One concludes that phonological primes that are computed by phonological computation, but which bear phonetic labels (and in fact are not defined or differentiated by any other property), are a (modular) contradiction in terms.

To illustrate, consider what the equivalent would be in another linguistic module, (morpho‑)syntax: what would you say about syntactic primes for number, animacy, person etc. that come along as "coronal", "labial" etc., without making any reference to number, animacy or person? That is, syntactic primes that are defined not by syntactic but by extra-syntactic (phonological) vocabulary? In this approach it would then be said that even though primes are defined by non-syntactic properties, they are syntactic in kind and undergo syntactic computation, which however ignores their non-syntactic properties.

This is but another way to state the common-sense question prompted by a system where the only properties that phonological primes have are phonetic, but are then ignored by phonological computation: what are the phonetic labels good for? They do not do any labour in the phonology, and they need to be actively ignored. Hale-Reiss-Volenec's answer was mentioned above: they exist because of learnability. This is what we address in the following point.

3. Learnability

Learnability concerns of substance-free melodic primes are addressed by Samuels (2012), Dresher (2018) and a number of contributions in Clements & Ridouane (2011). They are the focus of a recent ms by Odden (2019).

At a more general cognitive level, we know positively that the human brain/mind is perfectly able to make sense of sensory input that was never encountered before and is certainly not innate. Making sense here means "transforming a sensory input into cognitive categories". There are multiple examples of electric impulses coming to be interpreted, through learning, as either auditory or visual perception: cochlear implants on the one hand, so-called artificial vision (the bionic eye) on the other. The same goes for production: mind-controlled prostheses are real. Hence Hale & Reiss' statement that nothing can be parsed by the cognitive system that wasn't present at birth (or that the cognitive system does not already know) appears to be just incorrect. Saying that unknown stimuli can lead to cognitive categories everywhere except in phonology seems a position that is hard to defend.

References

Clements, George N. & Rachid Ridouane (eds.) 2011. Where do Phonological Features come from? Cognitive, physical and developmental bases of distinctive speech categories. Amsterdam: Benjamins.

Dresher, Elan 2018. Contrastive Hierarchy Theory and the Nature of Features. Proceedings of the 35th West Coast Conference on Formal Linguistics 35: 18-29.

Hale, Mark & Charles Reiss 2000. Substance Abuse and Dysfunctionalism: Current Trends in Phonology. Linguistic Inquiry 31: 157-169.

Hale, Mark & Charles Reiss 2003. The Subset Principle in Phonology: Why the tabula can't be rasa. Journal of Linguistics 39: 219-244.

Hale, Mark & Charles Reiss 2008. The Phonological Enterprise. Oxford: OUP.

Odden, David 2019. Radical Substance Free Phonology and Feature Learning. Ms.

Reiss, Charles & Veno Volenec 2018. Cognitive Phonetics: The Transduction of Distinctive Features at the Phonology–Phonetics Interface. Biolinguistics 11: 251-294.

Samuels, Bridget 2012. The emergence of phonological forms. In Anna Maria Di Sciullo (ed.), Towards a biolinguistic understanding of grammar: Essays on interfaces, 193-213. Amsterdam: Benjamins.

Comments:

  1. Thanks to Alex and Tobias for raising all these interesting issues. For my part, I'll work on clarifying (and maybe improving) my position for a contribution to the volume they are editing.
    (Note that Veno is the first author of our Biolinguistics paper, so "Volenec & Reiss").

    1. Hi Charles. I apologize for not checking the reference as I was (lightly) copy-editing the post.

  2. I am glad to see that this discussion is being continued on FoL. I will give a reply to the post by Alex and Tobias within a week.

  3. To contribute: on the last issue regarding babies' innate skills, see the following reference.
    Kuhl, P. (2004). Early language acquisition: cracking the speech code. Nature Reviews Neuroscience 5: 831-843.
    Very best.
    SW

  4. Paul Boersma sent me this comment, which I am posting in his name: "My conclusion from learnability considerations is the exact opposite from that of Hale & Reiss. After all, what linguists tend to label as [voice] in language A has entirely different phonetic cues from what linguists tend to label as [voice] in language B. As argued in Boersma (2012, LabPhon handbook), this would cause a problem if [voice] were innate: how would a baby's innate phonological feature [voice] know what phonetic cues to connect to in her new environment? My conclusion then (as in 1998) was that phonological features emerge in the baby learner from the environment. So yes, phonological features are at the same time substance-free and non-innate: indeed alpha, beta and gamma in language A, and delta, epsilon and zeta in language B. That they all have phonetic *correlates* is no issue: the connection of the phonological feature with phonetic cues is a result of the same learning process that caused the feature to emerge in the first place; this does not mean that the phonetic cues can play a role in computation within the phonology (phonetic considerations can of course influence phonological decisions because they exert pressure at the interface, but that's not the same)."

  6. Alex’s and Tobias’ post (ATp) attributes to me a position that I never held or argued for. Fortunately, once this straw man is removed from the picture, which I will do in the remainder of this post, it becomes clear that we are in agreement both on the substance-free nature of phonological features and on the modular nature of phonological competence.

    ATp write:
    “Veno has thus made explicit that phonological features in his view are substance-laden”

    I never did anything of the sort, neither in my last year’s post on this blog, nor in the published work with Charles (Volenec & Reiss 2018).

    Actually, my position was the exact opposite:
    “We understand distinctive features here as a particular kind of substance-free units of mental representation, neither articulatory nor acoustic in themselves, but rather having articulatory and acoustic correlates.” (V&R 2018: 253)

    “Outputs of the phonological module, surface representations (SRs) consisting of substance-free features, do not contain substantial and temporal information.” (V&R 2018: 262)

    “Assuming that phonological features can be regarded as abstract, substance-free mental units [...] the question is: How do features relate to phonetic substance?” (my last year’s post)

    The whole point of the V&R (2018) paper can briefly be stated thus: features are substance-free, therefore transduction is needed at the phonology-phonetics interface; let’s try to clarify the nature of that transduction. Therefore, I do not go about calling features substance-free while actually believing they are substance-laden, as suggested by ATp.

    In my last year’s post, I did point out that in the Hale & Reiss work an apparent contradiction can be found with respect to the substantive vs. substance-free nature of features, but that this apparent contradiction actually results from interchangeably talking about two separate things, namely, about features themselves and about their relation to phonetic substance.

    “In SFP literature two interesting and seemingly opposite formulations can be found: ‘substantive features’ (e.g., in Reiss 2016: 26; also 2018: 446 in the published version) and ‘substance-free features’ (e.g., in Hale & Kissock 2007: 84). While this introduces some confusion, it is not a contradiction. The notion ‘substantive features’ just means that, unlike for example certain formal aspects of rules (e.g., set unification, if you believe in that), features are somehow related to phonetic substance – the task is to find out how exactly. It is important to note that even in Halle (1983/2002: 108–109) [...] features are understood in this manner -- from the phonological point of view they are abstract and substance-free, and are related to phonetic substance indirectly” (my last year’s post)

    I thought it was perfectly clear that saying that features are somehow related to substance does not translate to the claim that features contain substance or *are* substance. Both in that post and in V&R (2018), I went on to state that there must be a lawful (non-arbitrary) relation between features and substance, but that this has nothing to do with phonology on its own (neither its representations nor its computations). Rather, it is a matter of the interaction between different modules (phonology and the sensory-motor system).

    [continued below]

    1. Perhaps this entire confusion arises because at one point I did entertain the possibility of substance being present in the features but being ignored by phonological computation. However, I immediately dismissed this option in favor of the substance-free conception of features:

      “[i]n principle it is not impossible that all that information is indeed encoded in features, and phonology systematically ignores it. Faced with this possibility, I’d be inclined to use the same line of reasoning that Chomsky (2013: 39) employs in arguing about the lack of linear order in syntax: If syntactic computation in terms of minimal structural distance always prevails over computation in terms of minimal linear distance, then the null-hypothesis should be that linear order is not available in syntax. If phonology systematically ignores certain information which are pertinent for speaking, then the null-hypothesis should be that that information is not available in phonology.” (my last year’s post)

      If I were writing this passage again, I would replace the word “available” with the word “present” to avoid any possible misunderstanding.

      I agree with ATp that considering features to be substance-laden while trying to maintain a modular conception of phonology is untenable. Fortunately, this issue does not arise with respect to my position, since I don't consider features to be substance-laden. I also don't think it really arises with respect to some older work by Hale and Reiss because, even though their formulations are sometimes unclear and therefore open to misinterpretation, they only speak of substantive features in the context of the features' externalization, not in the context of how they exist in the phonological module.

  7. Hi Veno, thanks for your clarification. I am speaking for myself here; perhaps Tobias will have more things to say. How does transduction work in your system if the primes have no labels? I'll start backwards: there is a phonetic object with [labial]ity; how does a transducer find the phonological object which is to be realized with labiality? This is the source of my confusion: it seems to me -- rather than being a strawman argument -- that your phonological object has to have some kind of phonetic information attached to it somewhere, else how could the transducer know what to do with it? Are your primes alphas and betas? I understand that it is via transduction that phonological objects are turned into phonetic ones, but it seems like a given phonological prime MUST be given a certain phonetic realization: phonological primes cannot escape their destiny, since transduction is non-arbitrary.

    This is what I think both Tobias and I mean by substance-laden: though the primes act in computation as though they have nothing phonetic in them (as Volenec & Reiss say, neither articulatory nor acoustic), they must, since transduction is non-arbitrary. Transduction finds SPECIFIC objects and realizes them in specific ways. I hope my question is clear.

  8. Thanks Veno for clarifying your position, which Alex and I misread probably because of the issues below. So the good news is that we agree that melodic primes are substance-free (or, more generally speaking, that there is no phonetic information present anywhere in the phonology).
    Here are the issues:

    1. What then in your view is the phonological identity of primes? Alphas, betas and gammas or some other arbitrary distinction (feature 1, feature 2, feature 3 etc.)? I'm asking because as far as I can see the only phonological identities that appear in your work are phonetically defined: [labial] etc. Alex and I may not be the only readers who took these labels for real.

    2. If phonological primes have no phonetic identity, how can you avoid an arbitrary relationship between them and their phonetic realization? That's Alex's question: how is transduction able to know that an alpha (or feature 12) will be realized as a particular phonetic object? In the absence of phonetic specification in the phonology, what prevents any phonetic object and its reverse from being associated with a given phonological prime? If primes are substance-free, children are born without any prior knowledge of how they will be pronounced and hence can only rely on environmental information to establish phonology-phonetics correspondences. If the environment varies randomly, then, so will correspondences.
    But maybe you also agree here, i.e. that the relationship between phonological and phonetic categories is arbitrary. Could you expand on your statement "there must be a lawful (non-arbitrary) relation between features and substance"? Does that mean that the transduction device operates in a lawful way and always translates a given phonological object into the same phonetic object? That I agree with. Or do you mean that it can be predicted from the identity of the phonological prime how it will be pronounced in the absence of language-specific evidence? Or, in other words, that some associations between phonological and phonetic categories are possible, while others are not?

  9. One bit missing, didn't fit in the 4000 characters:

    3. It seems to me that the position you take is quite different from the one that Hale & Reiss have expressed over the years. You say that "they only speak of substantive features in the context of the features' externalization, not in the context of how they exist in the phonological module." I don't think this is the case. Hale & Reiss (2003, 2008: 28ff) are explicit about the fact that substantive features are innate. That's the whole point of their card argument (coming from Jackendoff): you can't parse anything that you don't know in advance will come your way. Hence, goes the argument, phonetic input will only be unparsable noise for phonology if the child did not know in advance that they need to look for [labial], [continuant] etc.
    Here are some relevant quotes, all from p.38 of Hale & Reiss (2008):

    "the learner must possess the relevant representational primitives within the learning domain" (emphasis in original).

    "relevant primitives" can hardly refer to substance-free primes: alphas and betas are not any more relevant than gammas and deltas. There is no "relevance" when primes are substance-free.

    "children must “know” (i.e. have innate access to) the set of phonological features used in all of the languages of the world."

    more of the same: there is a universal feature set with phonetic specifications. It doesn't make sense to talk about phonological features used in all languages if these features are alphas and betas. A universal feature set is only meaningful in opposition to features that are not members of it. But there is nothing that alphas and betas can be opposed to.

    "Obviously, we are not claiming that the set of primitives of phonology corresponds exactly to the set of distinctive features referred to in the literature. There is no question that some of the features have yet to be identified or properly distinguished from others."

    again: there is a set of *phonological* primitives that corresponds to features whose precise nature, i.e. phonetic label, still needs to be figured out. It does not make sense to say that the set of phonological primes still needs to be figured out if these primes are alphas and betas: their identity is clear.
    Maybe Charles and Mark can make explicit their position on this issue. I remember a long discussion with Charles over lunch in Budapest in 2018 (Alex was there) where Charles defended the universal and innate character of phonological primes that have phonetic labels in the phonology.

  10. Thanks for your comments and questions, Alex and Tobias. In this post, I will try to clarify my position on the nature of phonological features and their relation to phonetic substance, in order to answer the interesting questions that you posed.

    In a discussion about whether features are free of substance or loaded with it, we must first be clear about what exactly we mean by substance. This might seem trivial to you, but I am being cautious here because not being on the same page about the notion of substance led to significant confusion and misunderstanding in the discussion on this topic last year. I take substance to be the totality of the articulatory, acoustic and auditory properties and processes that constitute speech. For example, properties and processes of speech such as movements of the tongue, values of formants, loudness, duration expressed in milliseconds etc. fall under the rubric of substance. Phonological features are substance-free in the sense that they do not contain information about such properties and processes.

    What are features then? Features are symbols physically realized in the brain. These symbols have certain neurobiological properties shared by all symbols in the brain generally, and certain other properties that differentiate them from other kinds of symbols in the brain. The common properties of neurobiological symbols are (at least) distinguishability, constructability and efficacy. The standard assumption in cognitive neuroscience is that different symbols are distinguished by place coding of neural activity, rate coding, time coding, or, most likely, some combination of those. Of course, we are still far from being able to state how exactly features qua neural symbols are realized in the brain, but highly promising work such as Phillips (2000), Hickok (2012), Bouchard et al. (2013), Mesgarani et al. (2014) and Patel et al. (2018) is zeroing in on the importance of the neural activity in the superior-most part of the STG, BA44 and BA6. Features also meet the criterion of constructability: the hallmark of phonology is the notion that features can be grouped into bundles in order to construct higher-level, non-atomic data structures such as segments and syllables. This property of features removes the need for storing an excessive amount of complex material (e.g., the ‘mental syllabary’ by Levelt & co. comes to mind here) and allows the phonological module to construct complex symbols as need arises. Features are also an efficacious way of coding information since their free combining (assuming very few constraints, perhaps only ‘consistency’, which prohibits combining logically exclusive valued features such as +voiced and –voiced in a single bundle) leads to combinatoric explosion, as described in detail by Reiss (2012) and Matamoros & Reiss (2016). For example, if we assume that the brain stores only 30 features (which is slightly more than is usually proposed in the literature) and allow them to be modified by +, –, and 0 (i.e., binary features with underspecification), from this small set of primitive symbols we can construct 206 trillion different segments.
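
    For concreteness, that figure can be checked directly:

        # 30 features, each taking one of three values (+, -, 0),
        # yield 3**30 distinct bundles (including the fully unspecified one).
        print(3 ** 30)  # 205891132094649, i.e. roughly 206 trillion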

    [continued below]

    1. From these general properties of neural symbols, it can be clearly seen that features qua neural symbols are not alphas or betas, or 1s and 2s---they are neural activities, and an ongoing research question is to discover their exact neurobiological substrate. Since we are far from being able to refer to features by stating their neurobiological substrate, we are forced to use symbols to refer to symbols. Thus, when we write [labial], we use the 2 brackets and the 6 letters to form a symbol for a particular feature, which is also a symbol, just in the brain. To reiterate, [labial] is a (non-neural) symbol for a (neural) symbol. WE need these phonetic labels to know what we’re talking about; the brain doesn’t. Features don’t need such labels because the transduction algorithms interpret the identity of a feature by the place of the neural activity (or a combination of the activity’s place and firing rate) in the brain. This is similar to how a computer does not retrieve the identity of a symbol solely on the basis of its form (1s and 0s), but rather by combining the information about the form with the location and context in the memory. Possibly, the form of all features is the same---a neural spike. But more importantly, the unique location of the spike and/or the rate of its repetition is how the transducer determines the identity of the feature and ‘knows’ which neuromuscular schema (e.g., labiality and not, say, nasality) to assign to it. We can of course debate whether it is misleading or not to use phonetic labels such as [labial] to refer to features qua neural symbols and whether there is a better solution to this. But our decision about this issue has no bearing on the actual nature of features: the neural symbol is, of course, the same irrespective of whether we refer to it as [labial], alpha, or Jimmy.
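
      Purely for illustration (the addresses and schema names below are invented), the lookup just described can be sketched like this:

        # The transducer reads a feature's identity off the *place* of neural
        # activity and maps it to a neuromuscular schema; the symbol itself
        # carries no phonetic content.
        TRANSDUCER = {
            0x2A41: "labiality schema",
            0x2A42: "nasality schema",
        }

        def transduce(spike_location):
            # Identity comes from where the spike occurs, not from a label.
            return TRANSDUCER[spike_location]

        print(transduce(0x2A41))  # -> labiality schema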

      [continued below]

    2. Features also seem to have some properties that are not generally shared by neural symbols, i.e., properties that make them phonological. On the one hand, features are manipulated by phonological operations, which should of course be understood as neural functions. The totality of the primitive phonological symbols and functions that manipulate them constitute the phonological module of the language faculty. On the other hand, features are interpreted by particular transduction algorithms at the interface between the phonological module and the sensory-motor system. Our Cognitive Phonetics theory (Volenec & Reiss 2018) spells out how this transduction proceeds (it seems to require at least two algorithms in speech production) and provides hypotheses about how this transduction is realized neurobiologically. Our proposed transduction algorithms are universal (on a more personal note, it seems silly to me even to entertain the thought that they are not---since these algorithms are *not* part of linguistic competence, it makes no sense to claim that something non-linguistic could be language-specific; just as there is no such thing as, say, a language-specific chair, there is no such thing as a language-specific transducer) and, less trivially, deterministic: via transduction a particular feature always triggers the same neuromuscular schema. However, the deterministic, lawful, non-arbitrary nature of the transduction algorithms does not mean that the articulatory and the concomitant acoustic substance will always be identical for each feature. It will not, because transduction is just one step in externalization---many other factors that are engaged in linguistic performance will also play a role in determining the acoustic output from the body, all the way from mood to having a sore throat. Thus, we always get ‘lack of invariance’, but that has nothing to do with transduction of features.
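
      To sketch the determinism point in the same toy terms (again, all names and values invented): transduction always returns the same schema for a given feature, while downstream performance factors perturb the eventual output.

        import random

        # Deterministic step: the same feature always yields the same schema.
        def transduce(feature_id):
            return {"F1": "schema-A", "F2": "schema-B"}[feature_id]

        # Performance factors (mood, sore throat, ...) perturb the realization,
        # yielding 'lack of invariance' without touching transduction itself.
        def realize(schema):
            jitter = random.gauss(0, 5)
            return "%s with %+.1f ms timing jitter" % (schema, jitter)

        print(realize(transduce("F1")))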

      To summarize: Features are physical symbols in the brain that do not contain phonetic substance; their identity is determined by spatio-temporal context in the brain; they are connected to substance by universal and deterministic transduction algorithms which we described within Cognitive Phonetics.

      Finally, from all of this it is pretty clear how features, understood as atomic neurobiological symbols, could be innate, if they indeed are innate: their cortical substrate matures with the rest of the brain, following a (genetically) predetermined trajectory which is triggered by (but not significantly altered by) the environment.
