Author Topic: Phonology is not grounded in phonetics  (Read 7423 times)

Offline zaba

  • Serious Linguist
  • ****
  • Posts: 272
Phonology is not grounded in phonetics
« on: March 05, 2014, 10:41:59 AM »
Yesterday I heard some guy say that

Quote
Phonology is not grounded in phonetics because the facts which phonetic grounding explains can be derived without phonology.

If you believe this to be true, why? It would be great if you could provide an example so that somebody like me could understand better. Thanks!

Offline MalFet

  • Global Moderator
  • Serious Linguist
  • *****
  • Posts: 282
  • Country: us
Re: Phonology is not grounded in phonetics
« Reply #1 on: March 07, 2014, 02:42:36 AM »
I'm having a really hard time parsing that sentence, to be honest. What are "the facts which phonetic grounding explains"? I'd assume he's appealing to some kind of markedness hierarchy, but that's only a guess.


Offline MalFet

  • Global Moderator
  • Serious Linguist
  • *****
  • Posts: 282
  • Country: us
Re: Phonology is not grounded in phonetics
« Reply #3 on: March 07, 2014, 04:47:40 AM »
What they're describing is a very "classical" generativist approach to phonology, which tries to describe the faculty of language in strictly computational terms. Furthermore, they're dismissing typological generalizations as irrelevant to their task. They're saying they don't need to appeal to phonetics to explain whatever it is they're trying to explain. I'm still not sure exactly what that is, but depending on how narrowly they construe their problem they may very well be right.

Offline panini

  • Linguist
  • ***
  • Posts: 194
Re: Phonology is not grounded in phonetics
« Reply #4 on: June 03, 2015, 06:30:51 PM »
What they are trying to do is state what computational thing a phonology is. The fundamental premise is that there is a mental computation whereby the affix /z/ is combined with noun roots to form plurals in English, and there is a computation which inserts [ɨ] in a certain context, and which changes z to s in a certain context. The theory of computations says what the form of such computations is (it could be "constraints" ruling out structures; it could be "rules" mapping specific strings to strings; it might operate on conjunctions of individual features; it could operate on collections of unanalyzable segmental atoms). Two things which are not part of that theory of computation are (a) statements of what the probability is that a particular rule will exist and (b) statements reifying the functional principles leading to those possibilities. Those considerations are outside of the theory of grammatical computation, and instead are in the domain of theories of perception, phonetics, and language change.
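As a toy illustration of what "the form of such computations" might look like, here is a minimal rule-based sketch of the plural computation described above. This is my own illustrative code, not H&R's formalism; the ASCII stand-ins (I for [ɨ], T for [θ]) and the segment classes are simplifying assumptions:

```python
# Toy sketch of English plural formation as ordered rewrite rules.
# ASCII stand-ins: S/Z = esh/ezh, I = barred-i, T = theta, V = wedge.

SIBILANTS = set("szSZ")    # sibilant-final roots trigger epenthesis
VOICELESS = set("ptkfTs")  # voiceless-final roots trigger devoicing

def pluralize(root: str) -> str:
    """Attach plural /z/, then apply two ordered rules."""
    form = root + "z"                  # morphology: concatenate the affix /z/
    if form[-2] in SIBILANTS:
        # Rule 1: insert [I] between a sibilant and the affix
        form = form[:-1] + "I" + "z"
    elif form[-2] in VOICELESS:
        # Rule 2: devoice the affix after a voiceless segment
        # (Rule 1, applying first, bleeds this rule for sibilant-final roots)
        form = form[:-1] + "s"
    return form

print(pluralize("kat"))   # -> "kats"   (cats: devoicing)
print(pluralize("dOg"))   # -> "dOgz"   (dogs: no rule applies)
print(pluralize("bVs"))   # -> "bVsIz"  (buses: epenthesis)
```

The point of the sketch is only that the grammar is a determinate computation over symbols; nothing in it states how probable such rules are cross-linguistically, which is exactly the division panini is drawing.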

I also think it is not accurate to impute a desire to "explain" to their research program; rather, the goal is to correctly describe. In their theory, the explanation for final devoicing in German is that the facts of the language lead unavoidably to the existence of such a rule, and the facts of the language as learned by children determine what the rules are. An explanation for why German historically developed final devoicing centuries ago would be in the domain of historical linguistics (which would owe a lot to phonetic theories), and the history of German.

Offline Copernicus

  • Linguist
  • ***
  • Posts: 61
  • Country: us
    • Natural Phonology
Re: Phonology is not grounded in phonetics
« Reply #5 on: June 04, 2015, 05:30:43 PM »
Final obstruent devoicing exists in a large number of languages throughout Europe.  Generally speaking, the physiological explanation is simple.  Any constriction in the oral cavity causes back-pressure that tends to counteract the pressure drop across the glottis that is needed to maintain voicing.  That is why you can maintain a [p] indefinitely, but you risk a head explosion if you try to maintain a [b] indefinitely. 

Malfet is right that they've bought into the classic SPE approach, which was actually under criticism in the 1960s, before the 1968 publication.  (We had all been receiving mimeographed copies of the chapters for quite a while.)  Markedness theory figures prominently in their thinking, but the real progenitor of markedness constraints was Roman Jakobson's so-called "implicational universals" (mistranslated as "universal rules of solidarity" in Child Language, Aphasia, and Phonological Universals).  Jakobson, in turn, got the germ of the idea from Baudouin de Courtenay, who had noticed such trends. 

Given the physiology of producing voiced obstruents, you do not need Markedness Theory to predict that large numbers of languages will have devoicing processes.  The question is how children come to learn them.  It should be no surprise that we see such processes running rampant in the articulation of infants during L1 acquisition for all languages.  So an alternative explanation to markedness calculations could be that children don't really acquire devoicing.  They suppress it.  However, that runs against the mainstream view that Markedness can somehow explain language acquisition and phonological universals.

Offline panini

  • Linguist
  • ***
  • Posts: 194
Re: Phonology is not grounded in phonetics
« Reply #6 on: June 04, 2015, 10:39:53 PM »
Actually, H&R repudiate markedness resoundingly, as well as implicational universals. So they pretty much entirely reject the classical SPE approach to "naturalness". As to how children learn that there is final devoicing, the facts of German are so crystal clear that you couldn't not learn the rule. Likewise, the facts of Lezgian and Somali, which have a rule of final voicing, are so crystal clear that you could not possibly fail to learn that there is such a rule in those languages.

There are two circumstances where final devoicing (or final voicing) becomes hard to learn from simple observation of the facts. One is when the data are contradictory to the point that it is unclear exactly when there is final devoicing -- for instance, in dialects which delete some final vowels, and where some words (e.g. loan words) simply don't undergo devoicing -- in which case the rule may drop out of the grammar; this is the end of the rule. The other circumstance is that it is not always clear when there is an actual devoicing rule. Often, the physiological factors which are the precursor to final devoicing don't actually neutralize the voicing opposition, and all you have is a decrease in the amplitude of voicing in final position. If the phonetics of final voiced consonants is subtle enough, a child may actually miss the acoustic cues that there is a difference between a voiceless obstruent and a mostly-devoiced obstruent, and they may mistakenly (from the adult perspective) assume that there is a phonological rule of final devoicing which turns /b/ into [p], as opposed to a phonetic implementation strategy that turns /b/ into [b̥].

Offline Copernicus

  • Linguist
  • ***
  • Posts: 61
  • Country: us
    • Natural Phonology
Re: Phonology is not grounded in phonetics
« Reply #7 on: June 05, 2015, 12:48:44 AM »
Quote from: panini
Actually, H&R repudiate markedness resoundingly, as well as implicational universals. So they pretty much entirely reject the classical SPE approach to "naturalness". As to how children learn that there is final devoicing, the facts of German are so crystal clear that you couldn't not learn the rule. Likewise, the facts of Lezgian and Somali, which have a rule of final voicing, are so crystal clear that you could not possibly fail to learn that there is such a rule in those languages.
From my cursory reading, it appeared to me that they were working within the SPE paradigm.  I'm glad to hear that they repudiated markedness, because it was never very clear how it was supposed to explain anything at all about either language learning or implicational universals, which are still pretty much valid generalizations about child pronunciations, phonemic inventories, and the direction of phonological change.  What is really interesting, though, is what it could possibly mean to "learn" a rule like final devoicing.  Does that mean becoming aware of its role in motivating well-formedness judgments?  Is it about any kind of awareness, or is it about being able to pronounce final voiceless obstruents?  The term "learn" can be quite vague in that respect.

But let's consider the mature German speaker learning English, because that is a really interesting case in point.  Germans learning English are told about final voiced consonants in English.  They hear them demonstrated, and they are drilled in pronouncing them.  So let's say that they learn that English has final voiced obstruents, but quite a few still never "learn" that in the sense that they can stop themselves from devoicing those consonants when they speak English.  The same is true of other languages with final devoicing.  That particular phonological "rule" is actually something like a speech impediment when it comes to speaking English, and truly "learning" English entails suppression of devoicing. 

Now let's consider a different "phonological rule"--the one that requires plural stems in certain English nouns to replace final voiceless consonants with voiced ones.  Of course, we know that this is not a so-called "low level" rule.  I'm talking about nouns like knife, leaf, hoof, house, etc.  What is ironic is that the German speaker has to "learn" that morphologically-governed substitution in a very different fashion--by actually learning to make the substitution when forming plurals.  So a typical German or Russian might well mispronounce /nayvz/ as [nayfs], but acquiring correct pronunciation entails (1) suppression of devoicing and (2) rote memorization of the morpheme-governed substitution.  What is interesting is that Germans and Russians have to succeed at both in order to stop themselves from mispronouncing the plural forms.  That is, they have to learn what to try to pronounce, on the one hand, and suppress their mispronunciation of it, on the other.  Yet modern generative phonology still treats both types of rules as essentially "phonology".  Earlier theories made a very deliberate distinction between the two types of rules, and that included Sapir, in whose footsteps Chomsky and Halle claimed to follow.  Trubetzkoy even said that they belonged to separate components of grammar.
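To make the contrast concrete, here is a toy sketch (my own illustration, using orthographic stand-ins rather than real transcription) of the lexically listed character of the knife/knives pattern: the affected stems simply have to be memorized, unlike an automatic process.

```python
# Morpheme-governed stem substitution: a listed class of nouns whose
# stem-final voiceless fricative is replaced by a voiced one before the
# plural suffix. "houze" is an orthographic stand-in for the [s] -> [z]
# change; real transcription is avoided for readability.

VOICING_CLASS = {
    "knife": "knive",
    "leaf": "leave",
    "hoof": "hoove",
    "house": "houze",
}

def plural(noun: str) -> str:
    # The substitution applies only to memorized members of the class;
    # everything else keeps its stem unchanged.
    stem = VOICING_CLASS.get(noun, noun)
    return stem + "s"

print(plural("knife"))   # -> "knives"
print(plural("cliff"))   # -> "cliffs" (not in the class, no substitution)
```

Nothing about the stems themselves predicts class membership, which is why a learner has to acquire this pattern by rote rather than by suppressing a natural articulatory tendency.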

Quote
There are two circumstances where final devoicing (or final voicing) becomes hard to learn from simple observation of the facts. One is when the data are contradictory to the point that it is unclear exactly when there is final devoicing -- for instance, in dialects which delete some final vowels, and where some words (e.g. loan words) simply don't undergo devoicing -- in which case the rule may drop out of the grammar; this is the end of the rule...
Right, but I look at this case just a little bit differently.  Linguists look at data and analyze patterns of distribution.  Language learners are faced with two very different problems--what sounds to try to pronounce and how to coordinate the articulation of those sounds.  It is not just about patterns of distribution.  It is about very different cognitive tasks.  So it is possible for a younger generation to mimic the pronunciation of elders, but use very different performance strategies to achieve it.  This is what Roman Jakobson referred to as "rephonologization".

Quote
The other circumstance is that it is not always clear when there is an actual devoicing rule. Often, the physiological factors which are the precursor to final devoicing don't actually neutralize the voicing opposition, and all you have is a decrease in the amplitude of voicing in final position. If the phonetics of final voiced consonants is subtle enough, a child may actually miss the acoustic cues that there is a difference between a voiceless obstruent and a mostly-devoiced obstruent, and they may mistakenly (from the adult perspective) assume that there is a phonological rule of final devoicing which turns /b/ into [p], as opposed to a phonetic implementation strategy that turns /b/ into [b̥].
True, but do you have any evidence that this is a significant factor in the behavior of L1 learners?  Have you looked at patterns of articulation in longitudinal studies?  I think you'll find that there are some rather spectacular substitutions occurring in the early stages of language acquisition.  If one proposes a theory of phonological acquisition, one ought to be able to account for them.

Offline panini

  • Linguist
  • ***
  • Posts: 194
Re: Phonology is not grounded in phonetics
« Reply #8 on: June 05, 2015, 09:39:17 AM »
Quote from: Copernicus
What is really interesting, though, is what it could possibly mean to "learn" a rule like final devoicing.  Does that mean becoming aware of its role in motivating well-formedness judgments?  Is it about any kind of awareness, or is it about being able to pronounce final voiceless obstruents?
It means that you learn (since you don't know a priori) that there is in the grammar of German a rule [-sonorant] → [-voice] / __ ]σ. The basis for learning that is that some stems end with voiced obstruents and others end with voiceless obstruents, but when the voiced obstruents are syllable final (i.e. not followed by a vowel), the obstruent changes voicing. And that is it. That rule could play a role in metalinguistic judgment where you ask a German speaker if they can say [bund] and they say "No", but that doesn't automatically follow from the speaker having learned the rule. It does not entail that German speakers are unable to pronounce words with final voiced obstruents. In the Hale & Reiss "minimalist" approach, grammars are not held to be entirely responsible for all aspects of linguistic behavior: they encourage and even demand independent investigation of metalinguistic knowledge.
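As a sketch of what the learned object amounts to on this view, here is a toy implementation of German final devoicing as a stored mapping over underlying forms. The segment inventory, the ASCII transcription, and the simplification of "syllable-final" to "word-final" are my own illustrative assumptions:

```python
# Toy model of the learned rule [-sonorant] -> [-voice] in syllable-final
# position, simplified here to word-final position. German-like data.

VOICED_TO_VOICELESS = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def final_devoicing(underlying: str) -> str:
    """Map an underlying form to its surface form by devoicing a
    word-final voiced obstruent; everything else is unchanged."""
    if underlying and underlying[-1] in VOICED_TO_VOICELESS:
        return underlying[:-1] + VOICED_TO_VOICELESS[underlying[-1]]
    return underlying

# /bund/ 'federation' surfaces as [bunt]; suffixed /bund+e/ keeps the [d],
# which is exactly the alternation that lets the learner posit /d/.
print(final_devoicing("bund"))    # -> "bunt"
print(final_devoicing("bunde"))   # -> "bunde"
```

Note that nothing in this mapping says German speakers *cannot* produce a final [d]; the grammar only states what the computation does, which is the point being made about metalinguistic judgments.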
Quote
Yet modern generative phonology still treats both types of rules as essentially "phonology".
That is correct, and that is the way it should be. There are many, many ways to subclassify phonological rules in terms of speaker behavior with respect to the rule, or even in terms of formal properties of rules (such as "rules that operate between adjacent segments" or "rules that operate between non-adjacent segments"). In the latter case there is an actually interesting question about the nature of phonological computations, since we need to find the proper means of saying "applies between non-adjacent segments". That's because stating that condition correctly is, by definition, what it means to have a theory of phonological computations. Knowing how a given rule relates to second language acquisition or speech errors is not part of the theory of grammatical computations, it is part of the theory of psycholinguistics (in the broadest sense).
Quote
Linguists look at data and analyze patterns of distribution.  Language learners are faced with two very different problems--what sounds to try to pronounce and how to coordinate the articulation of those sounds.  It is not just about patterns of distribution.
I disagree on what language learners are faced with. I agree that many linguists take statistical patterns of distribution to be part of the object of study, but there is no justification for putting distributional patterns in a grammar. I am reasonably confident though not absolutely certain that H&R also reject reifying distributional observations in grammar (for instance, morpheme structure constraints (MSCs), which they do specifically reject). For the purpose of this discussion, we can set aside distributional patterns as irrelevant to the nature of a grammar.

What is relevant is the resolution of contradiction. A child exposed to (American) English will hear that the name of the stuff filling the lakes is [waɾɹ̩], and they will learn that their dictionaries should include the symbolic sequence /waɾɹ̩/ for H2O. They will parse out [tɹʷɪm] from various instances of that verb root, such as [tɹʷɪm], [tɹʷɪmd], [tɹʷɪmz], [tɹʷɪmɪŋ], and likewise register the root as /tɹʷɪm/. They face a superficial contradiction when they encounter "rot" = [rɑt], [rɑts], [rɑɾɪŋ], [rɑʔn̩]: the root could be any of /rɑt, rɑɾ, rɑʔ/. When they learn that there are rules of glottalization and flapping, the contradiction evaporates, and the lexical thing to be learned is /rɑt/.
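The resolution-of-contradiction idea can be sketched as a small elimination procedure: keep only the candidate underlying segments from which every observed surface token is derivable by the known rules. The contexts and rule inventory below are my own toy assumptions for illustration, not a claim about actual acquisition:

```python
# Toy induction of the underlying root-final segment of "rot" from its
# surface alternants. Known (assumed) rules: /t/ surfaces as flap "R"
# intervocalically and as glottal stop "?" before syllabic n.

def derivable(underlying_seg: str, surface_seg: str, context: str) -> bool:
    """Can this underlying segment yield this surface segment here?"""
    if underlying_seg == surface_seg:
        return context in ("final", "before_s")   # faithful realization
    if underlying_seg == "t" and surface_seg == "R":
        return context == "intervocalic"          # flapping
    if underlying_seg == "t" and surface_seg == "?":
        return context == "before_syllabic_n"     # glottalization
    return False

# Surface evidence: (surface segment, context) from [rAt], [rAts],
# [rARing], [rA?n] -- ASCII stand-ins for the IPA in the post above.
evidence = [("t", "final"), ("t", "before_s"),
            ("R", "intervocalic"), ("?", "before_syllabic_n")]

candidates = {"t", "R", "?"}
viable = {u for u in candidates
          if all(derivable(u, s, ctx) for s, ctx in evidence)}
print(viable)   # -> {'t'}: only /t/ derives all four surface tokens
```

Once the flapping and glottalization rules are in place, the apparent three-way ambiguity collapses and /t/ is the only consistent lexical entry, which is the "contradiction evaporates" step.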

I am deliberately bypassing the question of the position of precise pronunciation, exemplified in my transcription [rɑt], which in my dialect is between [a] and [ɑ]. I think H&R do subscribe to the SPE "perceptually self-evident" theory of surface values, i.e. features are a priori phonetic descriptions so that you know the feature analysis of a string when you hear it. I seem to disagree with them on that point, and I'd rather keep this about the original question.
Quote
True, but do you have any evidence that this is a significant factor in the behavior of L1 learners?  Have you looked at patterns of articulation in longitudinal studies?  I think you'll find that there are some rather spectacular substitutions occurring in the early stages of language acquisition.  If one proposes a theory of phonological acquisition, one ought to be able to account for them.
Unfortunately, research in that area is at such a primitive state that there is nothing one can say about how phonetic variation becomes phonological variation. As H&R point out, infants don't have adult-style control over their motor systems, and you can't tell from impressionistic transcriptions of acoustic outputs what the pre-motor representation of various tokens of "lamb" is. As with all accounts of historical change, there is a great volume of speculation and few prospects for empirically testing these claims. Currently popular "biased coin toss" theories of phonological change likewise assume that ambiguous signals may tend to be resolved in a particular direction a small percentage of the time, which eventually leads to sound change, and this also hasn't been verified. My opinion is that the root of the research problem lies in data reduction techniques in child language studies, and adult language studies for that matter.

Offline panini

  • Linguist
  • ***
  • Posts: 194
Re: Phonology is not grounded in phonetics
« Reply #9 on: June 05, 2015, 10:28:02 AM »
I thought of a nice clear example of the problem. It has been believed by some that there is a rule in English devoicing /l/ after /s/, exemplified by slip, which some claim has the surface form [sl̥ɪp]. My understanding of the phonetic literature on this matter is that there is no such rule; instead, linguists have misinterpreted the extent/timing of the glottal opening associated with /s/ and other voiceless consonants, where /s/ and aspirated stops have a large glottal opening which then overlaps /l/ on the left, leading linguists to sometimes think that there is a rule of devoicing. It is possible that at some point the raw acoustic data will change enough and be reinterpreted so that this phenomenon is phonologized and English develops a surface segment [l̥]. But before we can detect that happening, we would need a better understanding of the difference between physical outputs of articulation, which are not representations, and phonological representations, and we would also need to resolve the question of whether the mind has such a thing as a "phonetic representation". Longitudinal studies can't answer those questions until we have some hope of answering them non-longitudinally.

Offline Copernicus

  • Linguist
  • ***
  • Posts: 61
  • Country: us
    • Natural Phonology
Re: Phonology is not grounded in phonetics
« Reply #10 on: June 05, 2015, 02:21:59 PM »
Quote from: panini
Quote
What is really interesting, though, is what it could possibly mean to "learn" a rule like final devoicing...
It means that you learn (since you don't know a priori) that there is in the grammar of German a rule [-sonorant] → [-voice] / __ ]σ. The basis for learning that is that some stems end with voiced obstruents and others end with voiceless obstruents, but when the voiced obstruents are syllable final (i.e. not followed by a vowel), the obstruent changes voicing. And that is it. That rule could play a role in metalinguistic judgment where you ask a German speaker if they can say [bund] and they say "No", but that doesn't automatically follow from the speaker having learned the rule. It does not entail that German speakers are unable to pronounce words with final voiced obstruents. In the Hale & Reiss "minimalist" approach, grammars are not held to be entirely responsible for all aspects of linguistic behavior: they encourage and even demand independent investigation of metalinguistic knowledge.
I am highly skeptical of the standard view that language learners acquire grammars in the generative sense, i.e. specialized sets of instructions for calculating well-formedness.  Rather, I think that the primary goal is performance-oriented and that intuitions of grammaticality are derivative of performance.  Regardless of maturity, a language learner is always concerned with mastery of linguistic skills.  So it is really an open question whether there exists a devoicing rule in the sense that you describe it here.  And I think that you have oversimplified the problem.  You know as well as I that there is a discrepancy between the sounds that speakers perceive and those that exist superficially in the acoustic signal.  And there is a discrepancy between what speakers think they are articulating and the actual articulatory production that they execute.  So it is really difficult to interpret what a "no" or "yes" response to the "Can you say..." question really means from a linguistic perspective.  Our theories, of course, bias our interpretations.  H&R are articulating a standard view of the role of grammars in generative theory, of course--one that has been remarkably resistant to criticism over the past half-century.  (I find that depressing, but understandable.)

Let me reiterate the basic dichotomy that I think you are missing, albeit for ideological reasons.  The language learner faces two very different questions in learning a new phonological system:
  • What sounds does the speaker intend to pronounce?
  • How is the speaker articulating those sounds?
The first question is the basis for phonological representation.  The second is the basis for motor coordination of the speech tract.  These are just two separate cognitive tasks that every language learner has to master.  Intuitions of well-formedness--what the speaker ought to be trying to say--are of secondary importance.  Well-formedness can be seen as a sense of perfection of performance, just as we have a sense of perfection in other forms of behavior.  That is, well-formedness intuitions are not special to linguistic behavior.  They are a generalized cognitive function that arises for every form of coordinated, intentional behavior.

Quote from: panini
Quote
Yet modern generative phonology still treats both types of rules as essentially "phonology".
That is correct, and that is the way it should be...
We disagree quite strongly on that, but we agree on what the standard view is within the generative paradigm.

Quote from: panini
...There are many, many ways to subclassify phonological rules in terms of speaker behavior with respect to the rule, or even in terms of formal properties of rules (such as "rules that operate between adjacent segments" or "rules that operate between non-adjacent segments"). In the latter case there is an actually interesting question about the nature of phonological computations, since we need to find the proper means of saying "applies between non-adjacent segments". That's because stating that condition correctly is, by definition, what it means to have a theory of phonological computations. Knowing how a given rule relates to second language acquisition or speech errors is not part of the theory of grammatical computations, it is part of the theory of psycholinguistics (in the broadest sense).
I have no trouble thinking of prosodic groupings as the proper level at which to capture non-adjacency, but generative theory blends phonology with morphology, which I think is a fundamental mistake.  Morphology deals with the first question above, and phonology with the second.  If you are going to bring up psycholinguistics, let's not forget that generative grammar is a purely psycholinguistic approach to language.  Unfortunately, it is a myopic approach in that it has a psycholinguistic theory of linguistic competence but lacks a corresponding theory of linguistic performance.  One can't have one without the other, although being a generative linguist seems to commit one to believing that one can.

Quote from: panini
Quote
Linguists look at data and analyze patterns of distribution.  Language learners are faced with two very different problems--what sounds to try to pronounce and how to coordinate the articulation of those sounds.  It is not just about patterns of distribution.
I disagree on what language learners are faced with. I agree that many linguists take statistical patterns of distribution to be part of the object of study, but there is no justification for putting distributional patterns in a grammar. I am reasonably confident though not absolutely certain that H&R also reject reifying distributional observations in grammar (for instance, morpheme structure constraints (MSCs), which they do specifically reject). For the purpose of this discussion, we can set aside distributional patterns as irrelevant to the nature of a grammar.
FTR, I said nothing here about putting distributional patterns in the "grammar".  We have a disagreement on what the "grammar" is.  If you ever engage in a field study of a language, those patterns become part of the record that you analyze, because you want to be very careful about imposing your own linguistic biases on the subject matter.  All I meant to say was that we should be wary of confusing language learning with linguistic analysis. 

Quote from: panini
What is relevant is the resolution of contradiction. A child exposed to (American) English will hear that the name of the stuff filling the lakes is [waɾɹ̩], and they will learn that their dictionaries should include the symbolic sequence /waɾɹ̩/ for H2O. They will parse out [tɹʷɪm] from various instances of that verb root, such as [tɹʷɪm], [tɹʷɪmd], [tɹʷɪmz], [tɹʷɪmɪŋ], and likewise register the root as /tɹʷɪm/. They face a superficial contradiction when they encounter "rot" = [rɑt], [rɑts], [rɑɾɪŋ], [rɑʔn̩]: the root could be any of /rɑt, rɑɾ, rɑʔ/. When they learn that there are rules of glottalization and flapping, the contradiction evaporates, and the lexical thing to be learned is /rɑt/.
I get what you are trying to say here, but the interesting thing is that language learners do get a lot of things wrong.  Linguistic theory ought to be able to account for the types of errors that they make, especially when errors crop up as patterns in stages of development.  However, I still consider your thought experiment here to be flawed in a number of ways, not the least of which is that L1 learners produce flaps and glottal stops at a very early age.  And they produce them in interesting patterns which suggest that they have phonological rules that do not seem to be based on observation of adult articulation.  Rather, they seem to have more to do with the difficulty of replicating adult pronunciation.  IOW, phonology isn't all about recognition of patterns in adult behavior.  It is also very much about suppression of misarticulation.

Let me propose a different way of looking at it.  Suppose that devoicing, flapping, premature glottal closure, etc., are all natural tendencies that afflict the production of phonetic targets.  That makes them equivalent to speech impediments, but only when they impede desired phonetic output.  Hence, suppression of those tendencies would be important only when they contradict desired phonetic output.  If those impediments arise naturally in the process of attempted articulation, then they are not learned.  At least part, if not all, of the "phonological system" that emerges is essentially the set of unsuppressed natural tendencies to misarticulate.  Except that they aren't technically misarticulations, if they don't impede pronunciation.  Indeed, they aid desired articulation.  And, BTW, suppression of undesirable movements is exactly what happens when people learn any kind of muscular coordination, so this is not really as radical an idea as it may sound.  So I haven't rejected your notion of observation-based "contradiction", but I have rejected your idea that operations such as devoicing, flapping, and glottal substitution are acquired or learned on the basis of observation.

Quote from: panini
Quote
True, but do you have any evidence that this is a significant factor in the behavior of L1 learners?  Have you looked at patterns of articulation in longitudinal studies?  I think you'll find that there are some rather spectacular substitutions occurring in the early stages of language acquisition.  If one proposes a theory of phonological acquisition, one ought to be able to account for them.
Unfortunately, research in that area is at such a primitive state that there is nothing one can say about how phonetic variation becomes phonological variation. As H&R point out, infants don't have adult-style control over their motor systems, and you can't tell from impressionistic transcriptions of acoustic outputs what the pre-motor representation of various tokens of "lamb" is. As with all accounts of historical change, there is a great volume of speculation and few prospects for empirically testing these claims. Currently popular "biased coin toss" theories of phonological change likewise assume that ambiguous signals may tend to be resolved in a particular direction a small percentage of the time, which eventually leads to sound change, and this also hasn't been verified. My opinion is that the root of the research problem lies in data reduction techniques in child language studies, and adult language studies for that matter.
I do think that David Stampe produced some rather brilliant analyses of some known longitudinal studies back in the 1960s, but he was starting with the assumptions I've taken here--that what was going on was largely evidence of a failure to suppress massive amounts of misarticulation.  What he showed was that the careful records kept on changing articulation revealed patterns of global misarticulation that changed on the basis of selective suppression.  What emerged in his work was a concept of the phonological system as a residual set of benign constraints on articulation.  Unfortunately, his work coincided with the emergence of generative phonology and its concomitant rejection of the dividing line between phonology and morphophonology--a line that he scrupulously maintained.  So he pretty much became background noise, especially since he had nothing much to say about the rest of the grammar.

Quote from: panini
I thought of a nice clear example of the problem. It has been believed by some that there is a rule in English devoicing /l/ after /s/, exemplified by slip, which some claim has the surface form [sl̥ɪp]. My understanding of the phonetic literature on this matter is that there is no such rule; instead, linguists have misinterpreted the extent/timing of the glottal opening associated with /s/ and other voiceless consonants, where /s/ and aspirated stops have a large glottal opening which then overlaps /l/ on the left, leading linguists to sometimes think that there is a rule of devoicing. It is possible that at some point the raw acoustic data will change enough and be reinterpreted so that this phenomenon is phonologized and English develops a surface segment [l̥]. But before we can detect that happening, we would need a better understanding of the difference between physical outputs of articulation, which are not representations, and phonological representations, and we would also need to resolve the question of whether the mind has such a thing as a "phonetic representation". Longitudinal studies can't answer those questions until we have some hope of answering them non-longitudinally.
Longitudinal studies exist in the literature.  The problem is that generative phonology does not have any way to explain the data.  However, I would treat the progressive devoicing of /l/ in slip as essentially the same process that forces English speakers to devoice initial obstruent clusters, thus preventing the occurrence of English minimal pairs like /sbɪl/ and /spɪl/.  Of course, the non-existence of mixed-voice obstruent clusters is sometimes treated as a rule-based constraint and sometimes as a static constraint on representation, which seems to miss a generalization.  However, that is not a problem with generative phonology alone, but with all alternation-based theories of phonology.  The flaw in thinking goes all the way back to Baudouin de Courtenay's seminal conception of the phonology/morphophonology divide as an alternational dichotomy.  He had no concept of phonological derivation, and subsequent phonologists stuck with his paradigm.  In modern phonological theory, alternations motivate the concept of rules, but constraints on collocations of sounds tend to be seen as static constraints on underlying or superficial representation.  A phonological theory based on automatic substitutions would not need static constraints to explain any phonemic or phonetic patterns.
« Last Edit: June 05, 2015, 02:24:28 PM by Copernicus »

Offline panini

  • Linguist
  • ***
  • Posts: 194
Re: Phonology is not grounded in phonetics
« Reply #11 on: June 06, 2015, 10:41:55 AM »
Copernicus, there is one essential question that has to be answered first, namely the ontological status of "phonology". Without prior understanding of and agreement on what a term refers to, it is impossible to compare competing theories of that object. A feature of some instantiations of Minimalism is that they consider word order and word-formation to be "phonology", which I find incomprehensible in the context of how phonologists actually use and have used the term "phonology". A less surprising (mis)interpretation of "phonology" is the taxonomically-inspired view that "phonology" only refers to allophonic processes. What you are describing is phonetics; I would certainly agree that phonetics should be phonetically grounded.

There are a few people who appear to actually deny that there is any such thing as phonology, in the sense that has been used for the past 50 years and the way I and H&R use it -- I'm referring to folks like Port and Ohala. I have never been able to pin them down on how they handle the facts that a phonology handles. The unfortunate problem is that their understanding of what a formal generative phonology does is exemplified by SPE, and the vast majority of phonologists now accept that the SPE analysis of English was founded on many unjustified assumptions: there is no case to be made for grammatically deriving the vowel alternations in obscene ~ obscenity. Phonology-deniers correctly hold that you just learn obscene and obscenity as separate lexical items. When they deny that there is a linguistic component "phonology", as far as I can tell, they are talking about a limited subset of things that had been called phonology, which, it is now generally agreed, is not actually part of phonology.

There are numerous very clear cases, and the one I prefer to point to as an exemplar of what a phonology does is the phonology of the verb in Classical Arabic. Mike Brame in his MIT dissertation does a skillful job of analyzing the complex pattern of alternations, especially focusing on the status of glides, and I hold that up as an iconic example.

One response to Brame's analysis is to simply deny the label, i.e. one can say "that's just morphology" (since one finds evidence for underlying forms and alternations by inspection of the verbal paradigm). Some people talk about "morphophonemics" as distinct from "phonology", but I don't know what that entails in terms of a model of grammatical computation -- is "phonology" the same as postlexical phonology in LP terms and "morphophonology" the same as lexical phonology? What then is "phonetics"?

Phonology is a specific part of a grammar, which underlies (but does not fully determine) a speaker's ability to use language. Syntax is another part of a grammar. If you accept that there is such a thing as what I call a phonology and just deny that it should be called "phonology", then what thing should be called "phonology", what thing should be called phonetics, and what should I call the thing that I'm calling "phonology"? If you deny that there is a phonology, in the sense that I use it, are you just denying phonology, or are you denying grammar?

Offline Copernicus

  • Linguist
  • ***
  • Posts: 61
  • Country: us
    • Natural Phonology
Re: Phonology is not grounded in phonetics
« Reply #12 on: June 06, 2015, 02:27:42 PM »
Copernicus, there is one essential question that has to be answered first, namely the ontological status of "phonology". Without prior understanding of and agreement on what a term refers to, it is impossible to compare competing theories of that object. A feature of some instantiations of Minimalism is that they consider word order and word-formation to be "phonology", which I find incomprehensible in the context of how phonologists actually use and have used the term "phonology". A less surprising (mis)interpretation of "phonology" is the taxonomically-inspired view that "phonology" only refers to allophonic processes. What you are describing is phonetics; I would certainly agree that phonetics should be phonetically grounded.
I think that I disagree with the statement I have put in boldface.  A linguistic term like "phonology" only makes sense within the context of a general linguistic framework.  If you accept the premises of generative linguistics, especially regarding the nature of "grammar", then it may well make sense to conflate phonology with morphophonology, not to mention other areas of grammatical description.  I remember quite clearly how we used to try to develop parallel formalisms in phonology and syntax within the generative framework--to strive for that elusive linguistic ToE, so to speak.  I no longer think in that way, because my functional-behavioral view of language drives me in a different direction.

Phonology, IMO, only makes sense as an aspect of sensorimotor behavior--the coordination of articulatory gestures, to be more precise.  You characterize my description as only referring to "allophonic processes", but I don't accept the theoretical baggage inherent in the old structuralist term "allophone".  In particular, I take phonological representation to be quite a bit more abstract than what traditional phonemic theory would deem allophonic variation.  That was also true for Sapir and Baudouin, by the way.  Their concept of phonology did not reject phonemic overlap, but it certainly was nowhere near as abstract as the so-called "systematic phonemic" level that SPE ended up endorsing.  For me, phonetics is just the study of articulatory and acoustic properties of speech.  Phonology should be construed as a psychological model of the production and perception of speech sounds.  That is quite different from merely studying properties of speech.

There are a few people who appear to actually deny that there is any such thing as phonology, in the sense that has been used for the past 50 years and the way I and H&R use it -- I'm referring to folks like Port and Ohala. I have never been able to pin them down on how they handle the facts that a phonology handles...
Oh, but I agree with much of what you say here.  Ohala, in particular, has no real theory of phonology.  He mixes it with the field of phonetics.

...The unfortunate problem is that their understanding of what a formal generative phonology does is exemplified by SPE, and the vast majority of phonologists now accept that the SPE analysis of English was founded on many unjustified assumptions: there is no case to be made for grammatically deriving the vowel alternations in obscene ~ obscenity. Phonology-deniers correctly hold that you just learn obscene and obscenity as separate lexical items. When they deny that there is a linguistic component "phonology", as far as I can tell, they are talking about a limited subset of things that had been called phonology, which, it is now generally agreed, is not actually part of phonology.
I think that your use of the term "phonology" begs the question of how we ought to use the term.  We shouldn't ultimately care so much about how people define the term as about why they define it the way they do.  Generative linguists who wish to deny that obscene~obscenity is phonological in nature need to have a clear theoretical basis for the denial.  Generativists are all over the map on the subject, because their theoretical framework is inherently devoid of a dividing line between what I call "phonology" and "morphophonology".  Working within such a fuzzy conception of language, people know that they ought to be drawing a line somewhere, but they can't seem to find a good solid place to draw it.

There are numerous very clear cases, and the one I prefer to point to as an exemplar of what a phonology does is the phonology of the verb in Classical Arabic. Mike Brame in his MIT dissertation does a skillful job of analyzing the complex pattern of alternations, especially focusing on the status of glides, and I hold that up as an iconic example.
I wouldn't question that Brame's data, or his analysis of them, are clear.  What I would question is whether his analysis ought to be treated as phonological, rather than morphological, analysis.  Don't forget that the man who coined the term "alternation" (our old friend Baudouin de Courtenay) held that there were two fundamentally distinct types of alternation--phonological (physiophonetic) and morphophonological (psychophonetic).  And his phonological alternations could be between two phonetic units that were superficially distinct phonemes, depending on where they fell in a word or morpheme.  Sapir said roughly the same thing in Language, when he said that there was a fundamental difference between the s/z alternation in books and bags and the s/z alternation in house and to house.  (Sapir was not quite as clear as Baudouin on what he meant by that claim, however.)  SPE conflated the two types of alternations and gave both of them the rubric "phonology".  No generative linguist, to my knowledge, has really understood Baudouin's original 19th-century insight, and that insight is what spawned the birth of phonology.  Ironically, Halle and Chomsky killed off Sapir by embracing him so closely.  They named SPE after Sapir's famous paper and claimed they were following in his footsteps, when, in fact, they were denying a categorical distinction that he explicitly called for.

One response to Brame's analysis is to simply deny the label, i.e. one can say "that's just morphology" (since one finds evidence for underlying forms and alternations by inspection of the verbal paradigm). Some people talk about "morphophonemics" as distinct from "phonology", but I don't know what that entails in terms of a model of grammatical computation -- is "phonology" the same as postlexical phonology in LP terms and "morphophonology" the same as lexical phonology? What then is "phonetics"?
I've defined phonetics for you as the study of sound properties, which is logically distinct from the study of how one produces and perceives speech.  As for the plethora of attempts to describe the phonemic/morphophonemic dichotomy within a generative framework, I would opine that they have been a massive failure.  The reason is that there is ultimately no psychological "grammar" in a generative sense.  That conception of a linguistic system is deeply flawed.  We do have intuitions of well-formedness, but they should not serve as the sole basis for a description of a linguistic system.  Linguistic theory must make a distinction between the strings of sounds that we associate with words and morphemes in memory and the mental program that governs our articulation of those sounds.  That is too fundamental a dichotomy for linguistic theory to ignore, yet we train budding young linguists to ignore it.

Phonology is a specific part of a grammar, which underlies (but does not fully determine) a speaker's ability to use language. Syntax is another part of a grammar. If you accept that there is such a thing as what I call a phonology and just deny that it should be called "phonology", then what thing should be called "phonology", what thing should be called phonetics, and what should I call the thing that I'm calling "phonology"? If you deny that there is a phonology, in the sense that I use it, are you just denying phonology, or are you denying grammar?
I fully embrace the distinction between phonology, morphology, and syntax.  Syntax governs our ability to stitch lexical units together into phrasal units.  Morphology governs alterations to the phonemic string associated with those lexical units in a syntactic context.  Phonology governs the articulation of prosodically-grouped phonetic units.  Phonetics provides a description of the acoustic and articulatory properties of speech sounds such that we can come up with a coherent way of describing phonological processes.  I hope that that clarifies how I use the terms.
« Last Edit: June 06, 2015, 02:36:34 PM by Copernicus »