Thursday, January 31, 2019 - 3:30pm to 5:00pm
- South Hall 3605
Language changes constantly, in ways that are constrained by both language-internal factors, such as word frequency, and language-external factors, such as social attitudes. A major challenge for linguistic theory is to give a unified explanation of these constraints on language change. In this talk, I argue that this challenge can be addressed by looking to spoken language perception, where passive but powerful perceptual biases give rise to many similar constraints on how listeners update the cognitive representations they draw upon for language use.
I present a theory of language change in which perceptual biases in the listener play a central role. To test this theory, I employ a computational approach, integrating experimentally-supported perceptual biases with computational modeling and novel corpus methods across two studies. In the first study, I build an empirically-grounded computational model to simulate word-frequency effects in sound change. I show that different word-frequency effects in different kinds of sound change follow from a single perceptual bias, whereby high-frequency words are recognized more easily than low-frequency words when acoustically ambiguous. In the second study, I extend the listener-based account to the effect of improving interethnic social attitudes on the spread of lexical items across ethnic groups in New Zealand. Drawing on biases in the perception of ‘other-accented’ words, I make specific predictions for the spread of the tag ‘eh’ from indigenous Māori to white Pākehā, which I test with novel corpus methods. Taken together, these two studies highlight how passive but powerful perceptual biases in the listener can give a unified explanation of different constraints on language change.
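The frequency bias described above can be illustrated with a minimal Bayesian word-recognition sketch: the posterior over candidate words is the acoustic likelihood weighted by a frequency prior, so when the acoustics are ambiguous the high-frequency word wins. This is an illustrative toy, not the model from the talk; all word pairs, counts, and likelihood values are invented for the example.

```python
def recognize(acoustic_likelihoods, word_frequencies):
    """Toy Bayesian word recognition: posterior ∝ likelihood × frequency prior.

    acoustic_likelihoods: dict word -> P(signal | word), from the acoustics
    word_frequencies: dict word -> corpus count, used as the prior
    Returns a normalized posterior distribution over candidate words.
    """
    total_freq = sum(word_frequencies.values())
    unnorm = {w: acoustic_likelihoods[w] * word_frequencies[w] / total_freq
              for w in acoustic_likelihoods}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

# An acoustically ambiguous token: the likelihoods alone cannot decide.
likelihoods = {"time": 0.5, "tine": 0.5}
# Hypothetical counts: 'time' is far more frequent than 'tine'.
freqs = {"time": 9800, "tine": 200}

posterior = recognize(likelihoods, freqs)
# The frequency prior resolves the ambiguity toward the high-frequency word:
# posterior["time"] = 0.98, posterior["tine"] = 0.02
```

Under this kind of model, ambiguous input is systematically mis-heard as (and so pulled toward) high-frequency words, which is one way a single perceptual bias can yield different word-frequency effects in different kinds of sound change.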