Abstract for my poster (with Shigeto Kawahara) at the Chicago Linguistic Society Workshop on Dynamic Modeling in Phonetics and Phonology coming up on May 24th:
Modelling articulatory dynamics in the frequency domain
We explore a new approach to modelling articulatory dynamics in terms of frequency components. Our data come from fleshpoint tracking with Electromagnetic Articulography (EMA). Using the Discrete Cosine Transform (DCT), we decompose tongue dorsum movement trajectories over VCu̥CV and VCuCV sequences in Japanese into cosine components of differing frequency and amplitude, and we demonstrate that four such components represent the signal with high precision. Making use of these compact representations, we evaluate whether the devoiced vowel in Japanese is specified for a lingual articulatory gesture or whether it is targetless. We compare these competing hypotheses through simulation: tongue dorsum trajectories are simulated from DCT components with either zero amplitude for the frequency component corresponding to [u̥], or with the amplitude found for this component in the voiced counterpart [u]. The simulated data are then used to classify the experimental data, assessing the posterior probability of a lingual gesture for [u̥] on a token-by-token basis. Results reveal that, under some conditions, Japanese speakers produce transitions between consonants without an intervening vowel. More broadly, we see promise in using frequency spaces to link low-dimensional phonological hypotheses to time-dependent articulatory data.
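To give a flavour of the general idea, here is a minimal sketch of the two steps the abstract describes: a DCT decomposition truncated to four components, followed by a simulation-based comparison of a "target" versus "targetless" hypothesis. Everything here is illustrative, not our actual pipeline: the trajectory is synthetic rather than EMA data, the vowel-related component index (`k_vowel`), the simulation noise level, and the Gaussian scoring are all assumptions made up for the example.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# --- Step 1: DCT decomposition -------------------------------------
# Synthetic stand-in for a tongue dorsum trajectory (the real data
# come from EMA fleshpoint tracking; this curve is purely illustrative).
t = np.linspace(0, 1, 100)
trajectory = (0.5 * np.cos(np.pi * t)
              + 0.2 * np.cos(2 * np.pi * t)
              + 0.05 * rng.normal(size=t.size))

# DCT-II: each coefficient is the amplitude of a cosine component
# of increasing frequency over the trajectory's duration.
coeffs = dct(trajectory, norm="ortho")

# Keep only the first four components, zero the rest.
k = 4
truncated = np.zeros_like(coeffs)
truncated[:k] = coeffs[:k]
reconstructed = idct(truncated, norm="ortho")

# Proportion of signal variance captured by the 4-component model.
r2 = 1 - np.var(trajectory - reconstructed) / np.var(trajectory)
print(f"variance explained by {k} DCT components: {r2:.3f}")

# --- Step 2: simulation-based hypothesis comparison ----------------
# Hypothetical sketch: simulate 4-coefficient representations under a
# "target" hypothesis (vowel-sized amplitude on the vowel-related
# component) and a "targetless" hypothesis (zero amplitude on it),
# then score an observed token under each to get a posterior.
k_vowel = 2  # assumed index of the component tied to the vowel gesture

def simulate_coeffs(vowel_amp, n=500):
    mean = coeffs[:4].copy()
    mean[k_vowel] = vowel_amp
    return rng.normal(loc=mean, scale=0.05, size=(n, 4))

target_sims = simulate_coeffs(vowel_amp=coeffs[k_vowel])
targetless_sims = simulate_coeffs(vowel_amp=0.0)

def loglik(obs, sims):
    # Score the observed coefficients under a Gaussian fitted to the sims.
    mvn = multivariate_normal(mean=sims.mean(axis=0), cov=np.cov(sims.T))
    return mvn.logpdf(obs)

obs = coeffs[:4]  # "observed" token: here, the synthetic trajectory itself
ll_target = loglik(obs, target_sims)
ll_targetless = loglik(obs, targetless_sims)

# With equal priors, the posterior probability of a lingual gesture:
p_target = 1 / (1 + np.exp(ll_targetless - ll_target))
print(f"posterior probability of a lingual gesture: {p_target:.3f}")
```

Because the synthetic token was generated with a nonzero vowel component, the sketch classifies it confidently under the "target" hypothesis; on real tokens of [u̥], this posterior is exactly the quantity that would distinguish the two accounts.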