PhD Project 8

Giving speech a hand: how functional brain networks support gestural enhancement of language

 

PhD-candidate: Linda Drijvers
PIs: Asli Özyürek (WP4) and Ole Jensen (WP6)
Start date: 01 September 2015

(last update 2019-06-27)

Research Content

Face-to-face communication involves audio-visual binding of speech and gesture, both of which carry semantic information to varying degrees. Using MEG, this project investigates for the first time how oscillatory neural interactions in an extended brain network reflect the integration of gesture and speech information, and the time course of this integration, in two settings where gestures can have enhancement effects: (a) during comprehension of degraded speech and (b) for subsequent memory of newly learned words. The results will integrate previous findings on the role of oscillations in speech comprehension, memory, and action observation, and provide insights into how brain networks adjust to processing audio-visual input that carries differing degrees of semantic information.

Highlight

Speech-gesture integration studied by rapid invisible frequency tagging
Team members: Drijvers, Spaak (DCCN), Özyürek, and Jensen (UoB)

Rapid invisible frequency tagging (RIFT) was used to investigate the integration of audio-visual information in a semantic context. By tagging the speech signal at 61 Hz and the gestures at 68 Hz, it could be studied where auditory and visual information interact in the brain, as indexed by the intermodulation frequency at 7 Hz (f2-f1 = 68 - 61 Hz).
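Intermodulation is a generic property of nonlinear systems: when two inputs at f1 and f2 drive a nonlinearity, power appears at f2-f1 (and f2+f1). The NumPy sketch below is illustrative only, not the MEG analysis used in the project; it simulates the two tagging frequencies, adds a multiplicative interaction term, and shows the resulting 7 Hz component in the spectrum.

```python
# Minimal sketch: why a nonlinear interaction of two tagged inputs
# produces power at the intermodulation frequency f2 - f1.
import numpy as np

fs = 600.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)    # 10 s of signal
f1, f2 = 61.0, 68.0             # auditory and visual tagging frequencies

auditory = np.sin(2 * np.pi * f1 * t)
visual = np.sin(2 * np.pi * f2 * t)

# A purely linear system contains only 61 and 68 Hz. A multiplicative
# (nonlinear) interaction introduces components at f2 - f1 and f2 + f1,
# since sin(a) * sin(b) = 0.5 * [cos(a - b) - cos(a + b)].
signal = auditory + visual + 0.5 * auditory * visual

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for f in (7.0, 61.0, 68.0, 129.0):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:5.1f} Hz: amplitude {spectrum[idx]:.3f}")
```

Running this prints clear peaks at 61 and 68 Hz and, crucially, at 7 Hz (and 129 Hz), which only exist because of the interaction term.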

First, a proof of principle was provided that RIFT can be used to tag visual and auditory inputs at high frequencies. Second, it was demonstrated that RIFT can be used to identify intermodulation frequencies in a multimodal, semantic context. The observed intermodulation frequency was the result of the interaction between the visually and auditorily tagged stimuli, and was localized to LIFG and pSTS/MTG, areas known to be involved in speech-gesture integration. The intermodulation response was strongest when integration between speech and gestures was easiest.
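Tagged and intermodulation responses of this kind are typically quantified as coherence between the stimulus tagging signal and the recorded brain signal. As a rough single-channel illustration (assuming only that a 7 Hz reference and a noisy measurement are available; this is not the project's source-level MEG pipeline), SciPy's coherence estimate recovers such a response:

```python
# Illustrative sketch: coherence between a 7 Hz "tagging" reference
# and a noisy measured signal peaks at 7 Hz.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 600.0
t = np.arange(0, 60, 1 / fs)             # 60 s of data

reference = np.sin(2 * np.pi * 7.0 * t)  # intermodulation reference
measured = 0.3 * reference + rng.standard_normal(t.size)  # weak response in noise

# Welch-based coherence estimate over 4 s segments
freqs, coh = coherence(reference, measured, fs=fs, nperseg=int(4 * fs))
print(f"coherence at 7 Hz: {coh[np.argmin(np.abs(freqs - 7.0))]:.2f}")
```

Even with the response buried well below the noise floor, the narrow-band coherence at 7 Hz stands out from neighbouring frequencies.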

In conclusion, it was proposed that the strength of this intermodulation response reflects the ease of semantic audio-visual integration, and that the combined input interacts in downstream, higher-order areas.

Figure: Sources of the intermodulation frequency (f2-f1) at 7 Hz and individual scores in the left frontotemporal ROI per condition.
A: Coherence change (in percent) at 7 Hz (intermodulation frequency, f2-f1) when comparing the stimulus window to a post-stimulus baseline, pooled over conditions. Only positive coherence values are plotted (>80% of maximum). No differences could be observed.
B: Power change (in percent) at 7 Hz when comparing the stimulus window to a post-stimulus baseline, pooled over conditions. Power changes were largest in left-frontal and left-temporal regions. Only positive values are plotted (>80% of maximum).
C: Power change values (in percent) per condition, extracted from the 7 Hz ROI containing the 20% highest coherence values. Raincloud plots show raw data, density, and boxplots for power change.
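The quantities in panels A-C amount to a percent change relative to baseline plus a coherence-based ROI selection. A schematic NumPy version of that computation, with all array names and sizes hypothetical:

```python
# Schematic of the ROI extraction described in panel C (all array names
# hypothetical): percent power change at 7 Hz, averaged over the source
# points with the 20% highest coherence values.
import numpy as np

rng = np.random.default_rng(1)
n_sources = 5000                             # source-grid points
coh_7hz = rng.random(n_sources)              # stand-in 7 Hz coherence map
power_stim = rng.random(n_sources) + 1.0     # 7 Hz power, stimulus window
power_base = rng.random(n_sources) + 1.0     # 7 Hz power, baseline window

# Percent change relative to baseline
power_change = 100.0 * (power_stim - power_base) / power_base

# ROI: sources above the 80th percentile of coherence (top 20%)
roi = coh_7hz >= np.quantile(coh_7hz, 0.80)
print(f"ROI size: {roi.sum()} sources, "
      f"mean power change: {power_change[roi].mean():.1f}%")
```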

Progress 2018

In 2018, the outcomes of the research project were communicated in poster sessions, several invited talks, and outreach events. Four articles were published in peer-reviewed journals. Key publication: Drijvers, L., Özyürek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087.
In 2018, the research outcomes were compiled into a coherent thesis for this PhD project. The formal thesis defence took place on May 13, 2019.

Groundbreaking characteristics

This was the first project to use RIFT to study how auditory and visual inputs interact in the brain, combining knowledge from neuroscience, linguistics, psychology, and engineering. The project benefitted from collaborations with researchers at Oxford University and the University of Birmingham, who provided unique insights from their respective fields into the study of language.