It has been generally accepted that phonetic representations of speech based on acoustic properties are distinct from those based on articulatory properties, and much ink has been spilled arguing their respective merits. Recent work has shown that their neural activation patterns are also distinct (e.g. Cheung, Hamilton, Johnson & Chang, 2016). This leads to an important question: how do we align the distinct patterns associated with the articulation and the acoustics of the same utterance in order to guide behaviors that demand sensorimotor interaction, such as vocal learning and the use of feedback during speech production? The hypothesis that I have been pursuing in recent work is that while the representations themselves are distinct, their patterns of change over time (temporal modulation) are systematically related. Preliminary results, using paired articulatory and acoustic data from the X-ray microbeam corpus, show that modulation in both articulatory movement and in the changing acoustics has a pulse-like structure. The articulatory and acoustic pulses are approximately aligned with each other in time, and the modulation functions are robustly correlated. Moreover, the pulses appear to be related to syllable structure, exhibiting on average one pulse per mora. The possibility of spatiotemporal modulation as a basis for the mora, and for syllable structure more generally, will be explored in the talk.
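The idea of correlating articulatory and acoustic modulation functions can be illustrated with a minimal sketch. This is not the analysis used in the work described above: the modulation measure here (the Euclidean norm of the frame-to-frame change of a multichannel signal) and the synthetic stand-in signals are illustrative assumptions only.

```python
import numpy as np

def modulation_function(frames):
    """Rate of change of a multichannel signal over time:
    the Euclidean norm of the frame-to-frame difference.
    `frames` has shape (n_frames, n_channels)."""
    return np.linalg.norm(np.diff(frames, axis=0), axis=1)

# Toy stand-ins for time-aligned articulatory and acoustic frames
# (real data would be, e.g., pellet trajectories and spectral frames).
# A phase whose speed waxes and wanes yields pulse-like modulation.
t = np.linspace(0, 2 * np.pi, 500)
phase = 3 * t + 0.8 * np.sin(2 * t)
artic = np.column_stack([np.sin(phase), np.cos(phase)])
acoust = np.column_stack([np.sin(phase + 0.1), np.cos(phase + 0.1)])

m_artic = modulation_function(artic)
m_acoust = modulation_function(acoust)

# Pearson correlation between the two modulation functions.
r = np.corrcoef(m_artic, m_acoust)[0, 1]
```

In this toy setup the two signals share the same underlying rate of change, so the correlation is, by construction, high; with real paired data the interest lies in how strong and how temporally aligned that correlation turns out to be.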
November 19, 2019 - 10:59am