Big Question 3

Creating a shared cognitive space: How is language grounded in and shaped by communicative settings of interacting people?

Language is a key socio-cognitive human function, used predominantly in interaction. Yet linguistics and cognitive neuroscience have largely focused on how individuals code and decode signals according to their structural dependencies. Understanding the communicative use of language requires shifting the focus of investigation to the mechanisms interlocutors use to share a conceptual space.

This big question examines how two dimensions shape multiple communicative resources (speech, gestures, gaze) and linguistic structures (from phonology to pragmatics): the temporal structure of communicative interactions and the functional dynamics of real-life communicative interactions.

There is deep collaboration between all BQ3 subprojects. The qualitative results from the simulation studies will be related to the empirical findings from the other subprojects, and vice versa: the empirical observations from the other subprojects will inspire the qualitative hypotheses to be tested. The cognitive agent-based simulation studies go beyond the empirical paradigm in the BQ3 project because they allow us to test for qualitative differences in interactive behaviour by manipulating the cognitive capacities of the agents, something that is difficult to do with human participants, while simultaneously leading to explicit theories of computational mechanisms.

People involved

Steering group

Prof. dr. Ivan Toni
PI / Coordinator BQ3
Profile page

Dr. Mark Blokpoel
Coordinating Postdoc BQ3
Profile page

Team Members

Flavia Arnese
Research Assistant

Dr. Sara Bögels
Postdoc
Profile page

Prof. dr. Mirjam Ernestus
PI
Profile page

Prof. dr. Stephen Levinson
PI
Profile page

Prof. dr. Asli Ozyurek
PI
Profile page

Dr. Wim Pouw
Postdoc
Profile page

PhD Candidates

Lotte Eijk
PhD Candidate
Profile page

Marlou Rasenberg
PhD Candidate
Profile page

Collaborators

Laura van de Braak
Dr. Mark Dingemanse
Prof. dr. Christian Döller
Dr. Judith Holler
Dr. George Kachergis
Rui Lui
Dr. David Neville
Dr. Iris van Rooij
Dr. Marieke Woensdregt

Alumni

Linda Drijvers – PhD
James Trujillo – PhD
Dr. Branka Milivojevic – Postdoc

Research Highlights (2020)

Highlight 1

Computational challenges in explaining communication

Team members: Laura van de Braak, Mark Dingemanse, Ivan Toni, Iris van Rooij, and Mark Blokpoel

When people are unsure of the intended meaning of a word, they often ask for clarification. One way of doing so—often assumed in models of communication—is to point at a potential target: “Do you mean [points at the rabbit]?” However, what if the target is unavailable? Then the only recourse is language itself, which seems equivalent to pulling oneself up from a swamp by one’s hair. We created two computational models of communication, one able to point (Figure 1, orange) and one not (Figure 1, blue). The latter incorporates additional sophisticated inference to resolve the meaning of non-pointing signals.

Figure 1. Left: Structure for ostensive (orange) and non-ostensive (blue) dialogue for one intended referent X. Black, orange and blue arrows denote input-output of information. Gray arrows denote agents’ decisions for continuing or ending the dialogue. Agents remember all turns and take this into account for future inferences. The key difference is that ostensive agents base future inferences on the ostensively declared referent Y (orange rectangle) whereas the non-ostensive agents base future inferences on a verbal signal S’ (blue rectangle). The initiator infers referent Z from S’ to determine whether they think that the responder understands them. Right: The inference mechanisms driving both models. Agents compute a probability distribution over all possible lexicons given their lexical bias and the dialogue history to attempt to infer a common lexicon. Non-ostensive agents (blue) require additional inference to infer the meaning of clarification requests s’, whereas ostensive agents (orange) get unambiguous feedback r.

The simulation results confirm that ostensive agents can achieve factual understanding, and they underscore the difficulty of computationally explaining non-ostensive communication. Without referentially clear signals, non-ostensive agents have no way of knowing when their inferences are factually correct and understand each other only at chance level (see also Figure 2). The challenge is clear: Without direct feedback, what computational infrastructure allows communicators to attain sufficient meta-understanding about their state of factual understanding? Given that the model presented here is not lacking in inferential capacity, it seems that more reasoning of the same kind is not the right answer. Computational theories of communication need to be expanded with a different kind of reasoning, one that explains how people can use context, background knowledge and other semiotic resources to attain sufficient meta-understanding.
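The lexicon inference at the heart of both models can be illustrated with a minimal sketch (toy signal/referent sets and hypothetical function names, not the project's actual model code): agents maintain a posterior over candidate lexicons given the dialogue history, and ostensive feedback, which pairs a signal with its declared referent, lets that posterior collapse onto the true lexicon.

```python
import itertools

# Toy setting: three signals, three referents, deterministic lexicons.
SIGNALS = ["s1", "s2", "s3"]
REFERENTS = ["r1", "r2", "r3"]

# All candidate lexicons: each signal maps to exactly one referent.
LEXICONS = [dict(zip(SIGNALS, perm)) for perm in itertools.permutations(REFERENTS)]

def posterior(prior, history):
    """P(lexicon | history): rule out lexicons inconsistent with the
    observed (signal, referent) pairs, then renormalise."""
    weights = []
    for lex, p in zip(LEXICONS, prior):
        consistent = all(lex[s] == r for s, r in history)
        weights.append(p if consistent else 0.0)
    total = sum(weights)
    return [w / total for w in weights] if total > 0 else prior

# Ostensive dialogue: the responder points at the referent, so the initiator
# observes unambiguous (signal, referent) pairs.
true_lex = LEXICONS[0]
prior = [1 / len(LEXICONS)] * len(LEXICONS)
history = []
for s in SIGNALS:
    history.append((s, true_lex[s]))
post = posterior(prior, history)
best = LEXICONS[post.index(max(post))]
print(best == true_lex)  # ostensive feedback identifies the lexicon

# Non-ostensive agents never observe referents directly: their history
# contains only further ambiguous signals, so the posterior cannot be
# narrowed this way, illustrating the chance-level understanding above.
```

This sketch shows only the ostensive case; the non-ostensive model in the highlight additionally has to infer the meaning of the clarification signal itself before it can update.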

Figure 2. Structure of dialogue progression for ostensive (above) and non-ostensive agents (below). Conversation progresses from left to right. Each graph shows for the ith dialogue the distribution of agent pairs over the repair sequence length. The rightmost bars in each graph show how many agent pairs gave up in that dialogue, and the colors indicate factual understanding.

Highlight 2

Creating shared (neural) representations

Team members: Sara Bögels, Branka Milivojevic, Flavia Arnese, and Ivan Toni

Empirical paradigm: The empirical part of BQ3 (see Figure 2A for an overview) is a study of about 70 pairs of participants engaged in face-to-face communicative interactions. Each pair needs to find its own way of uniquely identifying novel objects (“Fribbles”; see Barry et al., 2014). Through repeated interactions about each Fribble, pair-specific labels emerge. We consider several elements of the face-to-face interactions in each pair: for example, speech is transcribed, and co-speech gestures and pragmatic devices are identified. These multimodal measurements are meant to identify regularities in how pairs achieve mutual understanding. We focus on linguistic alignment (e.g., syntactic, phonological/phonetic, lexical, and semantic alignment); gestural alignment; and pragmatic devices (e.g., backchannels and repair). Before and after each pair engaged in the face-to-face interactions, we measured the participants’ individual representations of the Fribbles using both fMRI and behavioural metrics. We hypothesise that the individual representations of the Fribbles change as a function of the level of alignment achieved during the face-to-face interaction. We aim to uncover which variables measured during the interaction are the major contributors to the emergence of conceptual alignment during communication.

Figure 2A. Overview of the empirical paradigm.

One aim of this project is to investigate how the conceptual representations of two communicators change and become more similar as a result of communication. To investigate this, we measure participants’ individual representations of the Fribbles (novel objects) both before and after a series of communicative interactions about these objects (see empirical paradigm above), using both neural and behavioural metrics. The communicative interactions are structured in a ‘director-matcher’ task (e.g., Clark & Wilkes-Gibbs, 1986) in which each member of a pair takes turns describing the Fribbles to the other.

In the naming task, participants name each Fribble using one to three words. We compare the names given by the members of a pair to the same Fribble, before and after their communicative interactions, using similarity scores between vector-based models of those words (Mandera, Keuleers, & Brysbaert, 2017). The similarity score reflects the co-occurrences of those words in large text corpora. A preliminary analysis of 51 pairs shows increased similarity scores after those pairs engaged in face-to-face communicative interactions over those Fribbles (real pairs, Figure 2B, left panel). The increased similarity score is driven by the communicative interactions of each pair: there is no change in the similarity scores of random pairs, i.e. pairs of participants who performed the same tasks but did not interact with each other (Figure 2B, right panel).
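The core similarity computation can be sketched as follows, a toy illustration with made-up 3-dimensional vectors standing in for the corpus-derived word vectors of Mandera et al. (2017): the score for a Fribble is the cosine between the vectors for the two members' names.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors standing in for the two pair members' names for one Fribble,
# before and after the communicative interactions (invented values).
vec_pre_a, vec_pre_b = [1.0, 0.0, 0.2], [0.0, 1.0, 0.1]
vec_post_a, vec_post_b = [0.9, 0.3, 0.2], [0.8, 0.4, 0.1]

pre = cosine(vec_pre_a, vec_pre_b)
post = cosine(vec_post_a, vec_post_b)
print(post > pre)  # names converge after interaction (toy data)
```

In the actual analysis the vectors come from large text corpora, so a high cosine reflects that the two names tend to occur in similar linguistic contexts.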

Figure 2B. Preliminary results (51 pairs) for naming similarity before (pre) and after (post) the interaction, comparing names for the same Fribbles from real pairs that interacted with each other (left) and random pairs that did not (right).

Similar observations emerge from an independent behavioural measure of participants’ mental representations of the Fribbles. In the features task, participants rate each Fribble on 29 features, covering both more visual (e.g., “How rounded is this Fribble?”) and more abstract (e.g., “How human is this Fribble?”) properties of the objects (based on Binder et al., 2016). For each Fribble, we correlated the scores obtained across the 29 features between the two members of a pair (real pairs, Figure 2C, left panel) as well as between members of random pairs (Figure 2C, right panel). Preliminary results suggest that there is indeed an increase in feature-based similarity for real pairs, but not random pairs, after the interaction.
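The per-Fribble feature-similarity score described above can be sketched as a Pearson correlation between the two members' rating vectors (five toy features here instead of the study's 29; the ratings are invented):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

ratings_a = [3, 5, 1, 4, 2]  # member A's feature ratings for one Fribble (toy)
ratings_b = [2, 5, 1, 5, 2]  # member B's ratings for the same Fribble (toy)
print(round(pearson(ratings_a, ratings_b), 2))  # → 0.93
```

A high correlation means the two members profile the Fribble similarly across features, even if their absolute rating scales differ.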

Figure 2C. Preliminary results (51 pairs) for feature correlations before (pre) and after (post) the interaction, comparing feature ratings for the same Fribbles from real pairs (left) and random pairs (right).

This project aims to characterise the neural mechanisms that lead to the increased similarity in communicators’ mental representations of the Fribbles. We do so by using fMRI to measure neurovascular responses to visual presentations of the Fribbles, one by one, before and after each participant engages in the communicative interactions. By using Representational Similarity Analysis (RSA; Kriegeskorte, Mur, & Bandettini, 2008) – a type of analysis which uses correlations of fMRI activity patterns as a proxy for the similarity of neural responses to different Fribbles – we can quantify how the relations between the Fribbles change within as well as between members of real pairs and random pairs. More precisely, we first measure the activation pattern elicited by each Fribble in a particular brain region of each participant. Second, we correlate the activation patterns for different Fribbles in the same participant, yielding an RSA matrix of correlations that is specific to that participant (see Figure 2D). These matrices will then be correlated between participants to see how similar their representations of the Fribbles are. We hypothesise that representations of the Fribbles become more similar following an interaction in real pairs, but not in random pairs, and that this effect varies as a function of the level of alignment achieved during the face-to-face interaction. This project also aims to understand how these changes in neural representations across different brain regions are influenced by particular metrics of the communicative interaction.
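The two-step RSA logic described above can be sketched with synthetic data (random voxel patterns, not real fMRI data; array sizes are arbitrary): build each participant's Fribble-by-Fribble correlation matrix from activation patterns, then correlate the matrices' upper triangles across participants.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fribbles, n_voxels = 4, 20

# Synthetic activation patterns: participant B's patterns are noisy
# copies of participant A's, mimicking highly aligned representations.
patterns_a = rng.normal(size=(n_fribbles, n_voxels))
patterns_b = patterns_a + 0.1 * rng.normal(size=(n_fribbles, n_voxels))

def rsa_matrix(patterns):
    """Fribble-by-Fribble correlations of voxel activation patterns."""
    return np.corrcoef(patterns)

def upper(m):
    """Off-diagonal upper triangle, the unique pairwise correlations."""
    return m[np.triu_indices_from(m, k=1)]

# Between-participant similarity: correlate the two RSA matrices' upper triangles.
similarity = np.corrcoef(upper(rsa_matrix(patterns_a)),
                         upper(rsa_matrix(patterns_b)))[0, 1]
print(round(similarity, 2))  # near-duplicate patterns give strongly correlated RSA matrices
```

Comparing upper triangles rather than raw patterns is what makes RSA work across participants: voxel spaces are not aligned between brains, but the relational structure among the Fribbles is comparable.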

Figure 2D. Overview of the planned fMRI analysis using representational similarity analysis (RSA) to compare brain activation patterns of two participants.

References Highlight 2:

Binder, J. R., Conant, L. L., Humphries, C. J., Fernandino, L., Simons, S. B., Aguilar, M., & Desai, R. H. (2016). Toward a brain-based componential semantic representation. Cognitive Neuropsychology, 33(3–4), 130–174. https://doi.org/10.1080/02643294.2016.1147426

Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22(1), 1–39. https://doi.org/10.1016/0010-0277(86)90010-7

Kriegeskorte, N., Mur, M., & Bandettini, P. A. (2008). Representational similarity analysis – connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2. https://doi.org/10.3389/neuro.06.004.2008

Mandera, P., Keuleers, E., & Brysbaert, M. (2017). Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: A review and empirical validation. Journal of Memory and Language, 92, 57–78. https://doi.org/10.1016/j.jml.2016.04.001