The inferential cognitive geometry of language and action planning: Common computations?
The efficiency and flexibility with which humans generate meaning during language comprehension (or production) is remarkable. How does our brain do it? To move beyond the many extant attempts to address this big question, BQ5 will treat linguistic inference as an instance of an advanced generative planning solution to the multi-step, sequential choice problems that we also face in other cognitive domains (e.g., chess, foraging and spatial navigation). Thus, BQ5 anticipates making unique progress in unravelling the mechanisms of fast, flexible and generative linguistic inference by leveraging recent major advances in our understanding of the representations and computations necessary for sequential model-based action planning. This approach will also lead us to revise current dual-system dogmas in non-linguistic domains, which have commonly over-focused on the contrast between a cognitive (flexible, but slow) and a habitual (fast, but inflexible) system: the current quest will encourage the integration of so-called ‘cognitive habits’ and their associated cognitive map-related neural mechanisms into theoretical models of both linguistic and non-linguistic inference.
We will leverage current rapid conceptual and methodological progress in our understanding of other cognitive systems, such as the ‘cognitive mapping’ mechanisms for action planning (Behrens et al., 2018; Bellmund et al., 2018), as well as predictive inference in perception (Martin, 2016; Martin, 2020), to advance our understanding of how we generate meaning in the state space of language. In non-linguistic problems, the goal state is a function of the reward that is to be maximized. In the linguistic problem that we consider here, the goal state is the compositional meaning that needs to be generated during comprehension and production. Leveraging the recently developed approaches to understand perceptual inference and action planning, we will contribute unique advances in our understanding of the neural code and computations that underlie the unbounded combinatoriality of language, i.e., the ease with which we can generate meaning.
Prof. dr. Roshan Cools
PI / Coordinator BQ5
Dr. Xiaochen Zheng
Coordinating Postdoc BQ5
Dr. Hanneke den Ouden
Dr. Silvy Collin
Dr. Stefan Frank
Tenure track researcher
Prof. dr. Peter Hagoort
PI / Coordinator BQ2
Dr. Ashley Lewis
Coordinating Postdoc BQ2
Dr. Ioanna Zioga
Dr. Marius Braunsdorf
Dr. Roel Willems
Prof. dr. Ivan Toni
PI / Coordinator BQ3
Prof. dr. Iris van Rooij
Dr. Bob van Tiel
Dr. Rene Terporten
Dr. Marieke Woensdregt
Dr. Mark Blokpoel
Dr. Monique Flecken
Dr. Naomi de Haas
Dr. Saskia Haegens
Dr. Yingying Tan
Dr. Branka Milivojevic – Postdoc
Dr. Mona Garvert
Research Highlights (2021)
Generalization and representation of novel compositional word meanings
Team members: Xiaochen Zheng, Mona Garvert, Jonne Roelofs (intern), Andrea Martin, Hanneke den Ouden, and Roshan Cools
The ability to generalize previously learned information to novel situations is fundamental for adaptive behaviour. The project investigates how we generate novel, compositional meaning (e.g., understanding the word “un-rejectable-ish”). To this end, we conducted an exploratory study (n = 36) followed by a replication study (n = 72) in which we taught participants compositional words from an artificial language and tested them with novel words using a semantic priming task.
Participants learned compositional pseudo-words consisting of a known stem and an unknown affix. Crucially, the affix alters the word meaning depending on its position and therefore yields unique compositional meanings in different sequential combinations with the stems (Figure 1A, “learning”). Our results showed that participants decomposed word meaning into its constituent parts and were able to generalize the acquired knowledge to understand novel pseudo-words, as reflected by a semantic priming effect (Figure 1B): they were faster in judging the semantic meaning of a target synonym word when the preceding compositional word suggested a similar meaning than when it suggested a mismatching meaning. Moreover, sequential order played an essential role in word meaning (de)composition, and participants learned the affix meaning as a function of its position (i.e., pre- or post-stem): the priming effect was larger when the sequential order of the preceding compositional word was congruent with the order rules from learning than when it was incongruent (Figure 1B). Individual performance depended on whether or not participants accounted for the sequential order rules (Figure 1C).
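The priming logic described above amounts to comparing response times across prime–target conditions. A minimal sketch of that comparison, using simulated response times (the variable names, condition labels and values are illustrative assumptions, not the actual study data):

```python
import numpy as np

# Simulated reaction times (ms) from a hypothetical semantic priming task.
# A matching prime (compositional word with a similar meaning) should speed
# up responses relative to a mismatching prime.
rng = np.random.default_rng(0)
rt_match = rng.normal(620, 60, 100)     # prime meaning matches the target
rt_mismatch = rng.normal(660, 60, 100)  # prime meaning mismatches the target

# Priming effect: mean slowdown for mismatching relative to matching primes.
priming_effect = rt_mismatch.mean() - rt_match.mean()
print(f"priming effect: {priming_effect:.1f} ms")  # positive -> facilitation
```

The same subtraction, computed separately for order-congruent and order-incongruent primes, would quantify the sequential-order effect reported in Figure 1B.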
The current project investigates structural inference in language and leverages knowledge from vision, memory, and planning. It not only extends our knowledge of the flexible and efficient nature of language processing – the key question addressed by BQ5 – but also probes mechanisms possibly shared with other cognitive domains. The project is challenging because of the different terminologies used by, and the common conceptual misalignment between, psycholinguists and neuroscientists studying learning and decision making. Nevertheless, through active, resilient, and well-coordinated team science, an integrative novel design, unique ideas and preliminary advances in understanding were achieved that would not otherwise have been possible.
The role of alpha and beta oscillations in naturalistic language processing
Team members: Ioanna Zioga, Hugo Weissbart, Ashley Lewis, Saskia Haegens, and Andrea Martin
In this study, we aim to test whether, during naturalistic speech processing, internal brain rhythms, and specifically alpha and beta oscillations, are organized in the same way as during low-level sensory processing. Twenty-five native Dutch speakers listened to stories spoken in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states (just opened, currently open, resolved) at each word of the stories, and used these to quantify cognitive load and reactivation strength during sentence comprehension. We then constructed encoding models (temporal response functions) to predict alpha and beta power from speech features.
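The per-word dependency states can be derived from a parse by treating each head–dependent link as an arc over word positions. A minimal sketch under that assumption (the function name, state definitions and toy parse are illustrative, not the study's exact coding):

```python
# heads[i] is the (0-based) head index of word i; the root has head -1.
def dependency_states(heads):
    """Count (just opened, currently open, resolved) dependencies per word."""
    n = len(heads)
    # Each dependency spans the positions between a word and its head.
    arcs = [(min(i, h), max(i, h)) for i, h in enumerate(heads) if h >= 0]
    states = []
    for t in range(n):
        just_opened = sum(1 for lo, hi in arcs if lo == t)   # arc starts here
        open_now = sum(1 for lo, hi in arcs if lo < t < hi)  # arc spans here
        resolved = sum(1 for lo, hi in arcs if hi == t)      # arc closes here
        states.append((just_opened, open_now, resolved))
    return states

# Toy parse: "the cat sleeps" with heads [1, 2, -1].
print(dependency_states([1, 2, -1]))
# -> [(1, 0, 0), (1, 0, 1), (0, 0, 1)]
```

Counts like these, one triple per word, can then serve as regressors for cognitive load and reactivation strength in the encoding models.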
We constructed encoding models with the features of interest (number of just opened / currently open / resolved dependencies), controlling for the low-level linguistic features of word onset and word frequency, with alpha (8-13 Hz) and beta (15-30 Hz) band power as the dependent variables. Reconstruction accuracy of the models was measured as the correlation between the actual brain signal and the reconstructed signal, following a leave-one-out cross-validation approach. Results showed that reconstruction accuracy was significantly higher for the actual models than for null models (i.e., models in which the actual features were replaced with features from a different story), in both frequency bands, especially over temporal areas (see Figure 2 below). Overall, the dependency features successfully predicted alpha and beta power beyond low-level linguistic features. Our findings reveal a link between the functional role of alpha and beta oscillations implicated in perceptual domains and language.
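The encoding-model pipeline above can be sketched with ridge regression on time-lagged features and a leave-one-story-out correlation score. Everything here is a simplified assumption for illustration (simulated data, a toy lag window, a generic ridge fit), not the study's actual implementation:

```python
import numpy as np

def lagged(X, n_lags):
    """Stack time-lagged copies of the features (a crude TRF design matrix)."""
    T, F = X.shape
    out = np.zeros((T, F * n_lags))
    for k in range(n_lags):
        out[k:, k * F:(k + 1) * F] = X[:T - k]
    return out

def fit_ridge(X, y, alpha=1.0):
    """Ridge regression: solve (X'X + aI) w = X'y."""
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

# Simulate four "stories": 3 features per word/time point (e.g. dependency
# counts), band power generated from a hidden lagged weight vector plus noise.
rng = np.random.default_rng(1)
w_true = rng.normal(size=3 * 5)
stories = []
for _ in range(4):
    X = rng.normal(size=(200, 3))
    Xl = lagged(X, 5)
    y = Xl @ w_true + rng.normal(scale=0.5, size=200)
    stories.append((Xl, y))

# Leave-one-story-out cross-validation: fit on three stories, score the
# held-out story as the correlation between actual and reconstructed signal.
scores = []
for i in range(len(stories)):
    X_tr = np.vstack([X for j, (X, _) in enumerate(stories) if j != i])
    y_tr = np.concatenate([y for j, (_, y) in enumerate(stories) if j != i])
    w = fit_ridge(X_tr, y_tr)
    X_te, y_te = stories[i]
    scores.append(np.corrcoef(X_te @ w, y_te)[0, 1])
print(f"mean reconstruction accuracy r = {np.mean(scores):.2f}")
```

A null model in this sketch would refit after shuffling which story's features are paired with which story's signal; the reported contrast is between the actual and null reconstruction accuracies.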
This study expands our understanding of the functional role of brain oscillations during language comprehension, and, critically, of the generalizability of these dynamics from lower-level perceptual to higher-level cognitive processing. Our findings will contribute to BQ5’s overarching aim of better understanding how we generate meaning during language processing.
Our research spans different levels of cognition, from low-level processing to high-level language comprehension, and combines research on brain oscillations with linguistic knowledge. This project was only possible through joint and well-coordinated team science between experts in their fields, each examining the research question from a different perspective, resulting in a comprehensive and advanced investigation of the topic.