Big Question 1

(last update 2020-04-28)

The nature of the mental lexicon: How to bridge neurobiology and psycholinguistic theory by computational modelling?

This Big Question addresses how to use computational modelling to link levels of description, from neurons to cognition and behaviour, in understanding the language system. The focus is on the mental lexicon, and the aim is to characterize its structure in a way that is precise and meaningful in neurobiological and (psycho)linguistic terms. The overarching goal is to devise causal/explanatory models of the mental lexicon that can explain neural and behavioural data. This will significantly deepen our understanding of the neural, cognitive, and functional properties of the mental lexicon, lexical access, and lexical acquisition.

The BQ1 team takes advantage of recent progress in modelling realistic neural networks, improvements in neuroimaging techniques and data analysis, and developments in accounting for the semantic, syntactic, and phonological properties of words and other items stored in the mental lexicon. Using one common notation, high-dimensional numerical vectors, neurobiological and computational (psycho)linguistic models of the mental lexicon are integrated, and methods are developed for comparing model predictions to large-scale neuroimaging data.
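To make the idea of a single vector notation concrete, the sketch below (in Python, with invented dimensionalities and random data rather than the actual BQ1 representations) shows how a lexical item's semantic and phonological properties can be packed into one high-dimensional vector and compared, with the same similarity measures, both to other lexical items and to a simulated neural response.

# Illustrative sketch only: lexical items and neural responses sharing one
# notation (high-dimensional vectors). Dimensionalities and data are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lexical representation: a semantic and a phonological vector
# concatenated into one high-dimensional vector per word.
semantic = {w: rng.standard_normal(300) for w in ["dog", "cat", "run"]}
phonological = {w: rng.standard_normal(50) for w in ["dog", "cat", "run"]}
lexicon = {w: np.concatenate([semantic[w], phonological[w]]) for w in semantic}

def cosine(u, v):
    """Cosine similarity: a common way to compare vector representations."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compare items within the lexicon, and link a model vector to a (simulated)
# neural response by correlation -- the same logic extends to encoding models
# fitted on real neuroimaging data.
neural_response = rng.standard_normal(350)                   # stand-in for measured data
print(cosine(lexicon["dog"], lexicon["cat"]))                # similarity within the lexicon
print(np.corrcoef(lexicon["dog"], neural_response)[0, 1])    # model-to-brain comparison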

BQ1 thus comprises three main research strands, focusing respectively on models of lexical representation, models of neural processing, and methods for bridging between model predictions and neural data. It is taken into account that lexical items rarely occur in isolation but form parts of (and are interpreted in the context of) sentences and discourse. Moreover, the BQ1 team refrains from prior assumptions about what the lexical items are; that is, lexical items need not be equivalent to words but may be smaller or larger units.

Thus, the Big Question is tackled from three directions:
(i) by investigating which vector representations of items in the mental lexicon are appropriate to encode their linguistically salient (semantic, combinatorial, and phonological) properties;
(ii) by developing neural processing models of access to, and development of, the mental lexicon; and
(iii) by designing novel evaluation methods and accessing appropriate data for linking the models to neuroimaging and behavioural data.

The BQ1 endeavour is inherently interdisciplinary in that it applies computational research methods to explain neural, behavioural, and linguistic empirical phenomena. One of its main innovative aspects is that it brings together neurobiology, psycholinguistics, and linguistic theory (roughly corresponding to different levels of description of the language system) using a single mathematical formalism; a feat that requires extensive interdisciplinary team collaboration. Thus, BQ1 integrates questions of a linguistic, psychological, neuroscientific, and data-analytic nature.

Highlights from 2019

Highlight 1: Discovering words in image-grounded speech using a deep neural network

Team members: Danny Merkx and Stefan Frank

We investigated to what extent it is possible to discover English words from pairs of images and unprocessed spoken descriptions of these images. We trained a deep neural network (DNN) to develop shared representations of images and speech (see Figure 1) and then analysed how well the sentence’s content words could be decoded from the layers of the speech-processing part of the DNN.
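As a rough illustration of this kind of model, the sketch below implements a dual encoder trained with a within-batch contrastive loss. It is not the architecture of Merkx, Frank, & Ernestus (2019); all layer sizes, the loss margin, and the data are placeholder assumptions.

# Hedged sketch of a speech-image dual encoder with a contrastive objective.
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Maps a sequence of audio features to a single embedding."""
    def __init__(self, feat_dim=40, hidden=256, emb=512):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, emb)

    def forward(self, x):                  # x: (batch, time, feat_dim)
        out, _ = self.rnn(x)
        return self.proj(out.mean(dim=1))  # mean-pool over time

class ImageEncoder(nn.Module):
    """Maps precomputed image features to the same embedding space."""
    def __init__(self, feat_dim=2048, emb=512):
        super().__init__()
        self.proj = nn.Linear(feat_dim, emb)

    def forward(self, x):                  # x: (batch, feat_dim)
        return self.proj(x)

def contrastive_loss(speech_emb, image_emb, margin=0.2):
    """Hinge loss pushing matching speech-image pairs closer together than
    mismatched pairs within the batch."""
    s = nn.functional.normalize(speech_emb, dim=1)
    i = nn.functional.normalize(image_emb, dim=1)
    sims = s @ i.t()                        # (batch, batch) similarity matrix
    pos = sims.diag().unsqueeze(1)          # similarities of matching pairs
    cost_im = (margin + sims - pos).clamp(min=0)      # speech vs. wrong images
    cost_sp = (margin + sims - pos.t()).clamp(min=0)  # image vs. wrong speech
    mask = torch.eye(sims.size(0), dtype=torch.bool)
    cost_im = cost_im.masked_fill(mask, 0)
    cost_sp = cost_sp.masked_fill(mask, 0)
    return cost_im.mean() + cost_sp.mean()

# Toy forward/backward pass with random data, just to show the training signal.
speech = torch.randn(8, 100, 40)            # 8 spoken captions, 100 frames each
images = torch.randn(8, 2048)               # 8 matching image feature vectors
loss = contrastive_loss(SpeechEncoder()(speech), ImageEncoder()(images))
loss.backward()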

Our DNN displayed the best performance on speech-to-image retrieval to date. More importantly, word decodability was fairly high (see Figure 2) and peaked in the intermediate layers of the DNN. These intermediate layers encode more abstract units than the audio input features but are not yet sensitive to the image semantics; hence, this is exactly the level at which we would expect words to be encoded. Because the model is not explicitly optimized to encode word units, our results show that the word is a useful unit for speech-to-image mapping that can be discovered by a DNN.
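The decodability analysis can be illustrated with a simple diagnostic probe: train a classifier on a layer's activations to detect whether a given word was present. The sketch below uses random stand-in activations and scikit-learn's logistic regression; it shows the general logic, not the exact analysis reported in the paper.

# Illustrative word-decodability probe, with random stand-ins for real model output.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_sentences, layer_dim = 500, 512
layer_activations = rng.standard_normal((n_sentences, layer_dim))  # pooled layer states
contains_word = rng.integers(0, 2, size=n_sentences)               # 1 if the target word was spoken

# Cross-validated accuracy well above chance would indicate that the layer
# encodes the presence of this word; repeating this per word and per layer
# gives a decodability profile across the network's depth.
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, layer_activations, contains_word, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")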

Figure 1. Structure of the speech-to-image DNN (image taken from Merkx, Frank, & Ernestus, 2019).
Figure 2. Word-detection score as a function of detection threshold (horizontal axis) and the DNN layer under investigation (image taken from Merkx, Frank, & Ernestus, 2019).

This work is highly interdisciplinary as it critically depends on expertise in artificial intelligence (deep learning systems), psycholinguistics, and human speech processing. Hence, the collaboration afforded by LiI team science was of substantial importance.

The development of computationally explicit models contributes to the overarching quest of LiI because it is instrumental in bridging between functional, algorithmic, and implementational (neural) levels of explanation, and thereby in coming to a comprehensive understanding of observed language phenomena. More specifically, implemented statistical models of language processing form testable theories of how properties of the cognitive system interact with properties of the language, which speaks to the question of boundary conditions of language and language use.

Highlight 2: Representational Stability Analysis

Team members: Samira Abnar, Lisa Beinborn, and Willem Zuidema

We took Representational Similarity Analysis (RSA), a technique popular in cognitive neuroscience, and adapted it to help analyze a number of models currently popular in Natural Language Processing, in particular BERT, ELMo, the Google Language Model (GoogleLM), and the Universal Sentence Encoder.

A novel application of RSA is Representational Stability Analysis, which compares instances of the same model while systematically varying a single model parameter. As a case study, we used this approach to analyze neural language encoding models, focusing on a single parameter: the length of the prior context presented to a model. Varying the amount of context allowed us to quantify the degree of context-dependence of different neural language models, and of different components of those models. If the internal representations are organized similarly regardless of how much additional context is presented to the model, context-dependence is low. If, on the other hand, the representations change with each additional amount of context included, context-dependence is high. Figure 3 shows that the second layer (L1) of the GoogleLM is much more sensitive to prior context than the first layer (L0); for example, the representational similarity of the second layer's activity between the conditions with no prior context and with 7 prior sentences is only 50%.

Figure 3. Depiction of the first and second layers of the GoogleLM model and the representational similarity of activity in each layer, depending on the amount of available prior context.
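The core computation shared by RSA and the stability analysis can be sketched as follows: represent the same stimuli under two conditions (here, two context lengths), build a pairwise-similarity matrix per condition, and correlate the two matrices. The code below is a minimal illustration with random stand-in activations, not the analysis pipeline used in the study.

# Minimal sketch of representational (stability) analysis with stand-in data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

n_sentences, dim = 100, 768
reps_no_context = rng.standard_normal((n_sentences, dim))    # e.g. 0 prior sentences
reps_long_context = rng.standard_normal((n_sentences, dim))  # e.g. 7 prior sentences

def similarity_matrix(reps):
    """Pairwise cosine similarities between all sentence representations."""
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    return normed @ normed.T

def representational_similarity(a, b):
    """Second-order similarity: correlate the upper triangles of two
    pairwise-similarity matrices."""
    idx = np.triu_indices(a.shape[0], k=1)
    rho, _ = spearmanr(a[idx], b[idx])
    return rho

# A high value means the representational geometry is stable across context
# lengths (low context-dependence); a low value means high context-dependence.
sim = representational_similarity(similarity_matrix(reps_no_context),
                                  similarity_matrix(reps_long_context))
print(f"representational similarity across context conditions: {sim:.2f}")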

Our work illustrates the benefits of interdisciplinary collaboration: it works out an innovative adaptation of a technique from cognitive neuroscience for use in natural language processing. It gives us more insight into how information is processed in modern natural language processing (NLP) models, which in turn helps us to better interpret the use of these models for predicting brain activation and thus, ultimately, to make progress on the overarching quest of LiI.

This work would not have been possible without the input over the years from members of other BQ1 projects.

Synergy with other Big Questions

  • The developed models can be evaluated against neuroimaging data collected in BQ2 and BQ4.
  • The role of prior structure in the language network, which is being investigated within BQ2, is also of great importance to the neural processing models developed in BQ1, and it might be possible to integrate findings and test specific hypotheses from BQ2 (e.g., on connectivity) within the BQ1 computational models.
  • Individual differences can be captured by variance in the orthographic/phonological, morpho-syntactic, and semantic vector representations developed in BQ1 (e.g., due to differences in training data or parameter settings), which may account for findings from BQ4.
  • Language models have come to play an important role in one of the work packages of the BQ5 proposal, where Frank plays a role as co-investigator.
  • Vector-based semantics can be applied in BQ3 to model how (shared) representations of novel concepts are shaped by communication.

People involved

Collaborators

Dr. Renato Duarte, Forschungszentrum Jülich, Germany
Prof. dr. Abigail Morrison, Bernstein Center, Freiburg, Germany