Big Question 5

The inferential cognitive geometry of language and action planning: Common computations?

The efficiency and flexibility with which humans generate meaning during language comprehension (or production) is remarkable. How does our brain do it? To move beyond the many extant attempts to address this big question, BQ5 will treat linguistic inference as an instance of an advanced generative planning solution to the multi-step, sequential choice problems that we also face in other cognitive domains (e.g. chess, foraging and spatial navigation). BQ5 thus anticipates making unique progress in unravelling the mechanisms of fast, flexible and generative linguistic inference by leveraging recent major advances in our understanding of the representations and computations necessary for sequential, model-based action planning. This approach will also lead us to revise current dual-system dogmas in non-linguistic domains, which have commonly over-focused on the contrast between a cognitive (flexible, but slow) and a habitual (fast, but inflexible) system. The current quest will encourage the integration of so-called ‘cognitive habits’ and their associated cognitive-map-related neural mechanisms into theoretical models of both linguistic and non-linguistic inference.

We will leverage current rapid conceptual and methodological progress in our understanding of other cognitive systems, such as the ‘cognitive mapping’ mechanisms for action planning (Behrens et al., 2018; Bellmund et al., 2018), as well as predictive inference in perception (Martin, 2016; Martin, 2020), to advance our understanding of how we generate meaning in the state space of language. In non-linguistic problems, the goal state is a function of the reward that is to be maximized. In the linguistic problem that we consider here, the goal state is the compositional meaning that needs to be generated during comprehension and production. Leveraging these recently developed approaches to understanding perceptual inference and action planning, we will contribute unique advances in our understanding of the neural code and computations that underlie the unbounded combinatoriality of language, i.e., the ease with which we can generate meaning.

People involved

Steering group


Prof. dr. Roshan Cools
PI / Coordinator BQ5
Profile page

Dr. Xiaochen Zheng
Coordinating Postdoc BQ5
Profile page

Dr. Andrea Martin
PI / Coordinator BQ5
Profile page

Team members

Dr. Hanneke den Ouden
PI
Profile page

Dr. Silvy Collin
PI
Profile page

Dr. Stefan Frank
Coordinator BQ1
Tenure track researcher
Profile page

Prof. dr. Peter Hagoort
Programme Director
PI / Coordinator BQ2
Profile page

Dr. Ashley Lewis
Coordinating Postdoc BQ2
Profile page

Dr. Ioanna Zioga
Postdoc
Profile page

Anna Aumeistere
Clinical Trial Coordinator

Dr. Roel Willems
PI
Profile page

Prof. dr. Ivan Toni
PI / Coordinator BQ3
Profile page

Prof. dr. Iris van Rooij
PI
Profile page

Dr. Bob van Tiel
Postdoc
Profile page

Dr. Rene Terporten
Postdoc
Profile page

Dr. Marieke Woensdregt
Postdoc
Profile page

PhD Candidates

Elena Mainetto
PhD Candidate
Profile page

Collaborators

Dr. Mark Blokpoel
Dr. Monique Flecken
Dr. Naomi de Haas
Dr. Saskia Haegens
Dr. Yingying Tan

Alumni

Dr. Branka Milivojevic – Postdoc
Dr. Mona Garvert

Research Highlights (2021)

Highlight 1

Generalization and representation of novel compositional word meanings

Team members: Xiaochen Zheng, Mona Garvert, Jonne Roelofs (intern), Andrea Martin, Hanneke den Ouden, and Roshan Cools

The ability to generalize previously learned information to novel situations is fundamental for adaptive behaviour. The project investigates how we generate novel, compositional meaning (e.g., understanding the word “un-rejectable-ish”). To this end, we conducted an exploratory study (n = 36) followed by a replication study (n = 72) in which we taught participants compositional words from an artificial language and tested them with novel words using a semantic priming task.

Participants learned compositional pseudo-words consisting of a known stem and an unknown affix. Crucially, the affix alters the word meaning depending on its position, and therefore yields a unique compositional meaning in each sequential combination with a stem (Figure 1A, “learning”). Our results showed that participants decomposed word meaning into its constituent parts and were able to generalize the acquired knowledge to understand novel pseudo-words, as reflected by a semantic priming effect (Figure 1B): they were faster to judge the meaning of a target synonym when the preceding compositional word suggested a similar meaning than when it suggested a mismatching meaning. Moreover, sequential order played an essential role in meaning (de-)composition: participants learned the affix meaning as a function of its position (i.e., pre- or post-stem), as reflected by a larger priming effect when the preceding compositional word followed the sequential order rules from learning than when it violated them (Figure 1B). Individual performance depended on whether participants accounted for the sequential order rules (Figure 1C).
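The priming-effect logic above can be sketched in a few lines. This is a minimal, hypothetical illustration: the condition labels follow the design, but the reaction times are invented for the example, not actual data, and medians are computed per participant as in the reported analysis.

```python
# Hypothetical sketch of the priming-effect computation described above.
# The RT values are illustrative, not the actual study data.
from statistics import median

# per-trial reaction times (ms) for one participant, keyed by prime condition
rts = {
    "congruent":   [612, 598, 640, 605, 587],   # affix position matches learning
    "incongruent": [655, 671, 648, 690, 662],   # affix on the wrong side of the stem
    "mismatch":    [702, 688, 715, 699, 705],   # affix with an unrelated meaning
}

# median RT per condition (medians are robust to RT outliers)
med = {cond: median(times) for cond, times in rts.items()}

# semantic priming effect: mismatch minus congruent (positive = facilitation)
priming_effect = med["mismatch"] - med["congruent"]

# order sensitivity: incongruent minus congruent (positive = order rule learned)
order_effect = med["incongruent"] - med["congruent"]
```

A participant showing both a positive priming effect and a positive order effect would pattern with the BUILD group; a positive priming effect without an order effect would pattern with the BLEND group.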

Figure 1. Experimental paradigm and main behavioral findings from the replication study. (A) Participants learned compositional pseudo-words consisting of a known stem (e.g., “good” in “good-kla”) and an unknown affix (e.g., “kla”). The affix alters the word meaning depending on its position (e.g., “-kla” as a suffix means the opposite, whereas “kla-” as a prefix means “young version”). We tested participants’ knowledge with novel compositional pseudo-words in a semantic priming task: participants made a semantic judgement on a target word following a priming pseudo-word that was either congruent with the sequential order from learning (e.g., “rich-kla”, where “-kla” means “opposite”), incongruent in order (e.g., “kla-rich”, where “-kla” but not “kla-” means “opposite”), or mismatched in meaning regardless of order (e.g., “rich-ran”/“ran-rich”, where neither “-ran” nor “ran-” means “opposite”). (B) Raincloud plots of median reaction times in the semantic judgement task across the three experimental conditions. The outer shapes represent the distribution of the data over participants; the thick horizontal line inside the box indicates the group median, and the bottom and top of the box indicate the group-level first and third quartiles of each condition. Each dot represents one participant. (C) The same plots split between two participant groups: the BLEND participants did not account for the sequential order rule in meaning composition, whereas the BUILD participants did.
***p < .001; **p < .01; *p < .05; n.s. not significant.

The current project investigates structural inference in language and leverages knowledge from vision, memory, and planning. It not only extends our knowledge of the flexible and efficient nature of language processing – the key question addressed by BQ5 – but also probes mechanisms possibly shared with other cognitive domains. The project is challenging because psycholinguists and neuroscientists who study learning and decision making speak different languages and often start from misaligned concepts. Nevertheless, through active, resilient, and well-coordinated team science, we achieved an integrative novel design, unique ideas, and preliminary advances in understanding that would not otherwise have been possible.

Highlight 2

The role of alpha and beta oscillations in naturalistic language processing

Team members: Ioanna Zioga, Hugo Weissbart, Ashley Lewis, Saskia Haegens, and Andrea Martin

In this study, we tested whether, during naturalistic speech processing, internal brain rhythms – specifically alpha and beta oscillations – are organized in the same way as during low-level sensory processing. Twenty-five native Dutch speakers listened to stories in spoken Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states (just opened, currently open, resolved) at each word of the stories, and used these to quantify cognitive load and reactivation strength during sentence comprehension. We then constructed encoding models (temporal response functions) to predict alpha and beta power from speech features.

We constructed encoding models with the features of interest (number of just-opened / currently open / resolved dependencies), controlling for the low-level linguistic features of word onset and word frequency, with alpha (8-13 Hz) and beta (15-30 Hz) band power as the dependent variables. Reconstruction accuracy was measured as the correlation between the actual brain signal and the reconstructed signal, following a leave-one-out cross-validation approach. Reconstruction accuracy was significantly higher for the actual models than for null models (i.e., models in which the actual features were replaced with features from a different story), in both frequency bands and especially over temporal areas (see Figure 2 below). Overall, the dependency features successfully predicted alpha and beta power beyond low-level linguistic features. Our findings reveal a link between the functional role of alpha and beta oscillations in perceptual domains and in language.
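The encoding-model pipeline described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the study's actual analysis: the feature counts, lag window, ridge penalty, and simulated band-power signal are all invented for the example, and a plain ridge regression with leave-one-story-out cross-validation stands in for the full temporal response function estimation.

```python
# Minimal sketch of a TRF-style encoding model with leave-one-story-out
# cross-validation. All numbers and the simulated data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def lag_features(X, n_lags):
    """Stack time-lagged copies of each feature column (a simple TRF design matrix)."""
    T, F = X.shape
    lagged = np.zeros((T, F * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * F:(lag + 1) * F] = X[:T - lag]
    return lagged

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression weights."""
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

# Simulate four "stories": 3 per-word count features (standing in for the
# just-opened / open / resolved dependency counts) and a band-power signal
# that partly depends on the lagged features plus noise.
n_stories, n_words, n_feat, n_lags = 4, 200, 3, 5
true_w = rng.normal(size=n_feat * n_lags)
stories = []
for _ in range(n_stories):
    X = rng.poisson(1.0, size=(n_words, n_feat)).astype(float)
    Xl = lag_features(X, n_lags)
    y = Xl @ true_w + rng.normal(scale=2.0, size=n_words)
    stories.append((Xl, y))

# Leave-one-story-out cross-validation: fit on the other stories, then
# score the held-out story as the correlation between predicted and
# actual signal (the "reconstruction accuracy" above).
accs = []
for i in range(n_stories):
    X_tr = np.vstack([s[0] for j, s in enumerate(stories) if j != i])
    y_tr = np.concatenate([s[1] for j, s in enumerate(stories) if j != i])
    w = fit_ridge(X_tr, y_tr)
    X_te, y_te = stories[i]
    accs.append(np.corrcoef(X_te @ w, y_te)[0, 1])

mean_acc = float(np.mean(accs))
```

The null-model comparison in the study follows the same scheme, except that the held-out story's features are swapped for features from a different story before prediction, which should drive the correlation toward zero.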

Figure 2. Comparison between the reconstruction accuracy of the actual models vs. the null models of each feature for the Dutch stories. Topoplots depict t values and significant sensor clusters are highlighted.

This study expands our understanding of the functional role of brain oscillations during language comprehension and, critically, of the generalizability of these dynamics from lower-level perceptual to higher-level cognitive processing. Our findings contribute to BQ5's overarching aim of better understanding how we generate meaning during language processing.

Our research spans different levels of cognition, from low-level processing to high-level language comprehension, and combines research on brain oscillations with linguistic knowledge. This project was only possible through joint and well-coordinated team science, with experts in their fields examining the research question from different perspectives, resulting in a thorough and well-rounded investigation of the topic.