Big Question 5

(last update 2020-05-12)

The inferential cognitive geometry of language and action planning: Common computations?

The efficiency and flexibility with which humans infer (or generate) meaning during language comprehension (or production) is remarkable. How does our brain do it? To move beyond the many extant attempts to address this big question, BQ5 will treat linguistic inference as an advanced solution to the multi-step, sequential choice problems that have long been faced in other cognitive domains (e.g., chess, foraging and spatial navigation). Specifically, BQ5 aims to make unique progress in unravelling the mechanisms of fast, flexible linguistic inference by leveraging recent major advances in our understanding of the representations and computations necessary for sequential model-based action planning. This approach will also lead us to revise current dual-system dogmas in non-linguistic domains, which have commonly over-focused on the contrast between a cognitive (flexible, but slow) and a habitual (fast, but inflexible) system. The current quest will encourage the integration of so-called ‘cognitive habits’ and their associated cognitive map-related neural mechanisms into theoretical models of both linguistic and non-linguistic inference.

We will leverage current rapid conceptual and methodological progress in our understanding of ‘cognitive mapping’ mechanisms for action planning (Behrens et al., 2018; Bellmund et al., 2018) to advance our understanding of how we generate meaning in the state space of language. In non-linguistic problems, the goal state is defined by the reward that is to be maximized. In the linguistic problem that we consider here, the goal state is the compositional meaning that needs to be generated during comprehension and production. Leveraging these recently developed approaches to understanding action planning, we will contribute unique advances in our understanding of the neural code and computations that underlie the unbounded combinatoriality of language, i.e., the ease with which we can generate meaning.
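The planning framing above can be made concrete with a toy sketch. The following is purely illustrative and not a model from this proposal: it shows sequential model-based planning in a small discrete state space, where the goal state carries the reward and a multi-step plan is derived by value iteration over an internal model of the transitions. All state names, the transition structure, and the reward scheme are invented for illustration.

```python
def value_iteration(states, transitions, goal, gamma=0.9, tol=1e-6):
    """Compute state values for reaching `goal`: reward 1 at the goal, 0 elsewhere."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                new_v = 1.0  # the goal state carries the reward
            else:
                # model-based backup: look ahead over the known successor states
                new_v = gamma * max(V[s2] for s2 in transitions[s])
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

def plan(start, transitions, V, goal):
    """Greedy multi-step plan: at each step, move to the highest-valued successor."""
    path = [start]
    while path[-1] != goal:
        path.append(max(transitions[path[-1]], key=V.get))
    return path

# A toy four-state space: each state lists its reachable successors.
transitions = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["D"],  # absorbing goal state
}
V = value_iteration(transitions.keys(), transitions, goal="D")
print(plan("A", transitions, V, goal="D"))  # → ['A', 'B', 'D']
```

The point of the sketch is the structural analogy: the planner reaches the goal state by simulating transitions through its internal model rather than by a cached habit, which is the kind of computation BQ5 proposes to relate to meaning generation in the state space of language.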

Synergy with other Big Questions

- BQ5 will benefit directly from the models of the mental lexicon and prior structure (e.g., semantic vector space models) that will be developed in BQ1 when developing experimental paradigms and models for assessing cognitive mapping-based linguistic inference.

- BQ5 will benefit from the predictive analysis methods developed in BQ2, in particular those using MEG. For this reason, the BQ5 team includes a BQ2 postdoc with expertise in predictive coding and electrophysiology.

- Synergies are anticipated between BQ3 and BQ5, given a shared interest in understanding how agents navigate and organize conceptual spaces. Specifically, BQ5 will benefit from methodological and conceptual design efforts in BQ3 to assess similarities between representations of novel objects within and between interlocutors. For this reason, a BQ3 member (Van Rooij) is also a BQ5 member.

These advances in BQ1, BQ2 and BQ3 will be integrated in BQ5, yielding significant supra-additive effects by filling the key neurocognitive gap between insights about mental lexicon representations (BQ1), the core language network (BQ2), and communicative intent (BQ3): How does our brain operate on the representational state space of language to infer the meaning that one wants to communicate? Bridging these big questions cannot be achieved by an individual scientist; it requires synergy of expertise across methods, across cognitive domains and across BQ teams.

People involved