New JASA paper

Shaw, J. A., & Tyler, M. D. (2020). Effects of vowel coproduction on the timecourse of tone recognition. The Journal of the Acoustical Society of America, 147(4), 2511-2524. pdf

Abstract: Vowel contrasts tend to be perceived independently of pitch modulation, but it is not known whether pitch can be perceived independently of vowel quality. This issue was investigated in the context of a lexical tone language, Mandarin Chinese, using a printed word version of the visual world paradigm. Eye movements to four printed words were tracked while listeners heard target words that differed from competitors only in tone (test condition) or also in onset consonant and vowel (control condition). Results showed that the timecourse of tone recognition is influenced by vowel quality for high, low, and rising tones. For these tones, the time for the eyes to converge on the target word in the test condition (relative to control) depended on the vowel with which the tone was coarticulated, with /a/ and /i/ supporting faster recognition of high, low, and rising tones than /u/. These patterns are consistent with the hypothesis that tone-conditioned variation in the articulation of /a/ and /i/ facilitates rapid recognition of tones. The one exception to this general pattern—no effect of vowel quality on falling tone perception—may be due to fortuitous amplification of the harmonics relevant for pitch perception in this context.
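To make "time for the eyes to converge on the target" concrete, here is a minimal, hypothetical sketch of how such a convergence point can be estimated from fixation-proportion curves. This is not the paper's analysis code; the function name, the margin and hold parameters, and the toy curves are all assumptions for illustration.

```python
import numpy as np

def convergence_time(time_ms, p_target, p_competitor, margin=0.1, hold=10):
    """Return the first time (ms) at which the target's fixation proportion
    exceeds the competitor's by `margin` and stays above for `hold` consecutive
    samples; returns np.nan if no such point exists. Thresholds are arbitrary."""
    above = (p_target - p_competitor) > margin
    for i in range(len(above) - hold + 1):
        if above[i:i + hold].all():
            return time_ms[i]
    return np.nan

# Toy condition-mean curves (invented, not the paper's data)
t = np.arange(0, 1000, 10)                                 # ms from word onset
target = 0.25 + 0.6 / (1 + np.exp(-(t - 500) / 80))        # rises toward the target word
competitor = 0.25 + 0.1 * np.exp(-((t - 400) / 150) ** 2)  # transient competitor interest

print(convergence_time(t, target, competitor))             # earlier value = faster recognition
```

On a scheme like this, an earlier convergence time in the test condition (relative to control) would correspond to faster tone recognition.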

Talk at BLS

My talk at the Berkeley Linguistics Society workshop “Phonological representations: at the crossroad between gradience and categoricity” (Feb 7-8) was entitled “Finding phonological structure in vowel confusions across English accents.” The talk draws a connection between some collaborative work on cross-accent speech perception (Shaw et al. 2018, 2019) and contrastive feature hierarchies, in the sense of Dresher (2009).

The slides are available here.

New paper in Frontiers

“Spatially Conditioned Speech Timing: Evidence and Implications” is part of the Frontiers research topic “Models and Theories of Speech Production”. The paper provides evidence that the temporal coordination of articulatory gestures in speech is sensitive to the moment-by-moment location of speech organs (tongue, lips), a result which has implications for mechanisms of speech motor control, including the balance between feed-forward and state-based feedback control.

Abstract:

Patterns of relative timing between consonants and vowels appear to be conditioned in part by phonological structure, such as syllables, a finding captured naturally by the two-level feedforward model of Articulatory Phonology (AP). In AP, phonological form – gestures and the coordination relations between them – receives an invariant description at the inter-gestural level. The inter-articulator level actuates gestures, receiving activation from the inter-gestural level and resolving competing demands on articulators. Within this architecture, the inter-gestural level is blind to the location of articulators in space. A key prediction is that inter-gestural timing is stable across variation in the spatial position of articulators. We tested this prediction by conducting an Electromagnetic Articulography (EMA) study of Mandarin speakers producing CV monosyllables, consisting of labial consonants and back vowels, in isolation. Across observed variation in the spatial position of the tongue body before each syllable, we investigated whether inter-gestural timing between the lips, for the consonant, and the tongue body, for the vowel, remained stable, as is predicted by feedforward control, or whether timing varied with the spatial position of the tongue at the onset of movement. Results indicated a correlation between the initial position of the tongue gesture for the vowel and C-V timing, suggesting that inter-gestural timing is sensitive to the position of the articulators, possibly relying on somatosensory feedback. Implications of these results and possible accounts within the Articulatory Phonology framework are discussed.

Shaw, J. A., & Chen, W.-r. (2019). Spatially Conditioned Speech Timing: Evidence and Implications. Frontiers in Psychology, 10(2726). doi:10.3389/fpsyg.2019.02726
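As a rough illustration of the relationship at issue (not the paper's data or analysis code), the sketch below asks whether a per-token C-V lag covaries with the tongue body's position at vowel-gesture onset; a reliable correlation is the kind of evidence reported in the paper, whereas strict feedforward control predicts independence. All variable names, units, and simulated values are assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical per-token measures of the kind an EMA study yields (invented values):
# vertical tongue-body position (mm) at vowel-gesture onset, and the lag (ms)
# between the lip (C) and tongue-body (V) gesture onsets.
rng = np.random.default_rng(0)
tongue_height_mm = rng.normal(0.0, 2.5, size=80)
cv_lag_ms = 60.0 - 4.0 * tongue_height_mm + rng.normal(0.0, 10.0, size=80)

# Strict feedforward control predicts no dependence of lag on initial position;
# a reliable correlation points toward position-sensitive (feedback-informed) timing.
r, p = stats.pearsonr(tongue_height_mm, cv_lag_ms)
print(f"r = {r:.2f}, p = {p:.3g}")
```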

AMP talk & poster: Oct 11

I’ll be presenting a couple of research projects at the Annual Meeting on Phonology (AMP).

Titles and links to abstracts are below:

Poster: Kevin Tang (University of Florida) and Jason Shaw (Yale University). Sentence prosody leaks into the lexicon: evidence from Mandarin Chinese

Talk: Shigeto Kawahara (Keio University), Jason Shaw (Yale University) and Shinichiro Ishihara (Lund University). Do Japanese speakers always prosodically group wh-elements and their licenser? Implications for Richards’ (2010) theory of wh-movement 

USC colloquium talk: Sept 23

Title and abstract from colloquium talk at USC, Sept 23, 2019:

The temporal geometry of phonology

 Abstract: Languages differ in how the spatial dimensions of the vocal tract, i.e., constriction location/degree, are organized to express phonological form. Languages also differ in temporal geometry, i.e., how sequences of vocal tract constrictions are organized in time. The most comprehensive accounts of temporal organization to date have been developed within the Articulatory Phonology framework, where phonological representations take the form of temporally coordinated action units, known as gestures (Browman & Goldstein, 1986; Gafos & Goldstein, 2012; Goldstein & Pouplier, 2014). A key property of Articulatory Phonology is the feed-forward control of articulation by ensembles of temporally organized gestures.

In this talk, I first make explicit how the temporal geometry of phonology conditions language-specific patterns of phonetic variation. Through computational simulation, I illustrate how distinct temporal geometries for syllable types and segment types (complex segments vs. segment sequences) structure phonetic variation. Model predictions are tested on experimental phonetic data from English (Shaw, Durvasula, & Kochetov, 2019; Shaw & Gafos, 2015), Arabic (Shaw, Gafos, Hoole, & Zeroual, 2011), Japanese (Shaw & Kawahara, 2018) and Russian (Kochetov, 2006; Shaw et al., 2019). Phonological structure formalized as ensembles of local coordination relations between articulatory gestures (Gafos, 2002) and implemented in stochastic models (Gafos, Charlow, Shaw, & Hoole, 2014; Shaw & Gafos, 2015) reliably describes patterns of temporal variation in these languages. These results crucially rely on feed-forward control of gestures. I close with data from Mandarin Chinese, which presents a potential challenge to strict feed-forward control. Unexpectedly, inter-gestural coordination in Mandarin appears to be sensitive to the spatial position of articulators—gestures begin earlier in time just when they are farther in space from their target. To account for the Mandarin data, I explore the possibility that gestures are temporally organized according to spatial targets, which requires a combination of feedback and feedforward control, and I discuss some implications of the proposal for speech perception and sound change.
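For readers unfamiliar with this style of modeling, the toy simulation below gestures at the idea (loosely in the spirit of the stochastic timing models cited above, e.g., Shaw & Gafos, 2015, but not their implementation): gesture landmarks are sampled with Gaussian timing noise under a c-center-style coordination, and the stability of different onset-to-anchor intervals is compared across simplex and complex onsets. Every parameter value here is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000           # simulated tokens per condition
ANCHOR = 300.0     # fixed anchor landmark (ms), e.g., a vowel-related landmark

def simulate_onsets(n_cons):
    """Toy model: the mean of the consonant onsets (the c-center) is coordinated
    with the anchor; individual consonants are spaced around it with timing noise."""
    c_center = rng.normal(100.0, 10.0, size=N)
    offsets = (np.arange(n_cons) - (n_cons - 1) / 2) * 60.0   # mean C-to-C spacing (ms)
    return c_center[:, None] + offsets + rng.normal(0.0, 8.0, size=(N, n_cons))

for n in (1, 2):   # simplex (C) vs. complex (CC) onset
    cons = simulate_onsets(n)
    cc_interval = ANCHOR - cons.mean(axis=1)      # c-center-to-anchor interval
    re_interval = ANCHOR - cons.max(axis=1)       # rightmost-C-to-anchor interval
    print(f"{n}C onset: "
          f"RSD(c-center) = {cc_interval.std() / cc_interval.mean():.3f}, "
          f"RSD(right edge) = {re_interval.std() / re_interval.mean():.3f}")
```

Roughly speaking, the modeling approach compares simulated interval distributions like these against measured articulatory landmark data to decide among candidate coordination topologies.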

References

Browman, C., & Goldstein, L. (1986). Towards an Articulatory Phonology. Phonology Yearbook, 3, 219-252.

Gafos, A. I., Charlow, S., Shaw, J. A., & Hoole, P. (2014). Stochastic time analysis of syllable-referential intervals and simplex onsets. Journal of Phonetics, 44, 152-166.

Gafos, A. (2002). A grammar of gestural coordination. Natural Language and Linguistic Theory, 20, 269-337.

Gafos, A., & Goldstein, L. (2012). Articulatory representation and organization. In A. C. Cohn, C. Fougeron, & M. K. Huffman (Eds.), The Oxford Handbook of Laboratory Phonology (pp. 220-231).

Goldstein, L., & Pouplier, M. (2014). The Temporal Organization of Speech. The Oxford handbook of language production, 210-240.

Kochetov, A. (2006). Syllable position effects and gestural organization: Articulatory evidence from Russian. In L. G. Goldstein, D. H. Whalen, & C. Best (Eds.), Laboratory Phonology 8 (pp. 565-588). Berlin: de Gruyter.

Shaw, J. A., Durvasula, K., & Kochetov, A. (2019). The temporal basis of complex segments. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019 (pp. 676-680). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

Shaw, J. A., & Gafos, A. I. (2015). Stochastic Time Models of Syllable Structure. PLoS ONE, 10(5), e0124714.

Shaw, J. A., Gafos, A. I., Hoole, P., & Zeroual, C. (2011). Dynamic invariance in the phonetic expression of syllable structure: a case study of Moroccan Arabic consonant clusters. Phonology, 28(3), 455-490.

Shaw, J. A., & Kawahara, S. (2018). The lingual articulation of devoiced /u/ in Tokyo Japanese. Journal of Phonetics, 66, 100-119. https://doi.org/10.1016/j.wocn.2017.09.007

ICPhS proceedings out

2019 ICPhS proceedings papers are now available online: https://assta.org/proceedings/ICPhS2019/

Mine are also below as pdfs.

Shaw, J. A., Durvasula, K., & Kochetov, A. 2019. The temporal basis of complex segments. In Sasha Calhoun, Paola Escudero, Marija Tabain & Paul Warren (eds.) Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019 (pp. 676-680). Canberra, Australia: Australasian Speech Science and Technology Association Inc.  pdf

+Zhang, M., +Geissler, C., & Shaw, J. A. (2019). Gestural representations of tone in Mandarin: Evidence from timing alternations. In Sasha Calhoun, Paola Escudero, Marija Tabain & Paul Warren (eds.) Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019 (pp. 1803-1807). Canberra, Australia: Australasian Speech Science and Technology Association Inc.  pdf

Shaw, J. A., Best, C. T., Docherty, G., Evans, B., Foulkes, P., Hay, J., & Mulak, K. (2019). An information theoretic perspective on perceptual structure: cross-accent vowel perception. In Sasha Calhoun, Paola Escudero, Marija Tabain & Paul Warren (eds.) Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019 (pp. 582-586). Canberra, Australia: Australasian Speech Science and Technology Association Inc. pdf

New paper in Laboratory Phonology

New paper published open access in Laboratory Phonology: Journal of the Association for Laboratory Phonology. The full text can be accessed through the DOI in the citation listed below.

Title: Resilience of English vowel perception across regional accent variation

Abstract: In two categorization experiments using phonotactically legal nonce words, we tested Australian English listeners’ perception of all vowels in their own accent as well as in four less familiar regional varieties of English which differ in how their vowel realizations diverge from Australian English: London, Yorkshire, Newcastle (UK), and New Zealand. Results of Experiment 1 indicated that amongst the vowel differences described in sociophonetic studies and attested in our stimulus materials, only a small subset caused greater perceptual difficulty for Australian listeners than for the corresponding Australian English vowels. We discuss this perceptual tolerance for vowel variation in terms of how perceptual assimilation of phonetic details into abstract vowel categories may contribute to recognizing words across variable pronunciations. Experiment 2 determined whether short-term multi-talker exposure would facilitate accent adaptation, particularly for those vowels that proved more difficult to categorize in Experiment 1. For each accent separately, participants listened to a pre-test passage in the nonce word accent but told by novel talkers before completing the same task as in Experiment 1. In contrast to previous studies showing rapid adaptation to talker-specific variation, our listeners’ subsequent vowel assimilations were largely unaffected by exposure to other talkers’ accent-specific variation.

How to Cite: Shaw, J. A., Best, C. T., Docherty, G., Evans, B. G., Foulkes, P., Hay, J., & Mulak, K. E. (2018). Resilience of English vowel perception across regional accent variation. Laboratory Phonology: Journal of the Association for Laboratory Phonology, 9(1), 11. DOI: http://doi.org/10.5334/labphon.87
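As a purely hypothetical illustration of what tabulated categorization data from such an experiment can look like (the accents, lexical-set labels, and responses below are invented, and this is not the paper's analysis), one can compute the proportion of trials on which listeners chose the intended vowel category for each accent:

```python
from collections import defaultdict

# Toy trials: (talker accent, intended vowel category, category chosen by the listener)
trials = [
    ("Australian", "KIT", "KIT"),    ("Australian", "KIT", "KIT"),
    ("London", "KIT", "KIT"),        ("London", "KIT", "FLEECE"),
    ("New Zealand", "KIT", "STRUT"), ("New Zealand", "KIT", "KIT"),
]

counts = defaultdict(lambda: [0, 0])   # accent -> [intended-category choices, total trials]
for accent, intended, chosen in trials:
    counts[accent][0] += int(chosen == intended)
    counts[accent][1] += 1

for accent, (match, total) in counts.items():
    print(f"{accent}: {match}/{total} responses matched the intended category")
```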