PhD Project 2
Driving forces behind perceptual adaptation in speech
Spoken-language comprehension requires dynamic adjustments in how the acoustic signal is mapped onto abstract phonetic categories. This enables listeners to deal rapidly and effectively with many sources of variability in speech. Several mechanisms have been documented, demonstrating that the re-tuning of phonetic categories can be driven by a range of different kinds of information. This project asks whether these mechanisms converge or whether they are in fact separate phenomena. Four types of adaptation will be investigated in terms of their behavioural characteristics and their effect(s) on neural processing in auditory cortex.
Although this project is primarily in the domain of cognitive neuroscience, it draws substantially on other disciplines: experimental psychology, psycholinguistics, and linguistics. Project 2 investigates learning mechanisms in speech for the first time with ultra-high-field fMRI and state-of-the-art analysis techniques.
The project explores how phonetic boundaries can be recalibrated using contextual information contained in the speech signal. A behavioural study has been completed that compared two sources of recalibration: a listener’s lexical knowledge and the speaker’s lip movements, each of which can affect how an ambiguous sound is perceived. The study established that listeners can move their perceptual boundary back and forth between a phoneme pair, such that the same ambiguous sound is perceived as either of the two phonemes depending on the context in which it is presented. Listeners can adjust their phoneme boundaries in accordance with either lexical or lip-reading information, although lip-reading information appears to be more effective than lexical information in inducing a shift in the perceptual boundary. Listeners cannot recalibrate these boundaries efficiently when lexical and lip-reading information alternate rapidly; however, when the two sources alternate more slowly, with switches every three to four minutes, listeners can still recalibrate even when both types of information are presented within the same session.
An fMRI study is currently in progress to elucidate the neural mechanisms underlying the recalibration process, and how these mechanisms may differ depending on the source of the recalibration, that is, lexical or lip-reading information. Additional experiments are planned to explore recalibration in non-native listeners, to determine whether they adapt similarly to native listeners.