PhD Project 13

Neurobiologically realistic computational models of language processing

 

PhD-candidate: Marvin Uhlmann
PIs: Karl-Magnus Petersson (WP3) and Peter Hagoort (WP3)
Start date: 01 February 2015

(last update 2019-06-27)

Research Content

This project entails a team effort, drawing on results from neurophysiology and techniques from computational neuroscience, to understand the neurobiology underlying language processing by building a neurobiological causal model of sentence processing based on recurrent networks of spiking neurons. The goal is to develop a model with processing memory in which vector representations of words are incrementally interpreted in terms of thematic roles. A core objective is to investigate the computational role of different properties of the neuron and the neural network, and the neurobiological computational mechanisms by which language-relevant tasks, such as binding, can be solved.
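To make the approach concrete, the following is a minimal, illustrative sketch (in Python/NumPy) of a small recurrent network of leaky integrate-and-fire neurons driven by word-specific input vectors, whose spike-count state could serve as input to a trained linear readout for thematic roles. The architecture, parameter values and names are assumptions made for illustration; this is not the actual NBL model.

    # Minimal sketch (not the actual NBL model): a small reservoir of leaky
    # integrate-and-fire neurons driven by word-specific input vectors; the
    # spike-count state is what a trained linear readout would map to roles.
    # All parameter values and names are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 200            # number of reservoir neurons
    dt = 1e-3          # simulation time step (s)
    tau_m = 20e-3      # membrane time constant (s)
    tau_syn = 100e-3   # slow synaptic time constant (s)
    v_thresh, v_reset = 1.0, 0.0

    W_rec = 0.1 * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
    W_in = rng.standard_normal((N, 50))                       # input projection (50-dim word vectors)

    def run_word(word_vec, v, i_syn, steps=50):
        """Drive the network with one word vector; return spike counts and state."""
        counts = np.zeros(N)
        for _ in range(steps):
            i_syn += dt / tau_syn * (-i_syn)                   # slow synaptic decay
            v += dt / tau_m * (-v + W_in @ word_vec + W_rec @ i_syn)
            spikes = v >= v_thresh
            v[spikes] = v_reset
            i_syn += spikes.astype(float)                      # spikes feed the slow synapses
            counts += spikes
        return counts, v, i_syn

    # Present a toy two-word "sentence" and collect the state that a linear
    # readout could map to thematic roles (agent, patient, ...).
    v, i_syn = np.zeros(N), np.zeros(N)
    state = []
    for word_vec in rng.random((2, 50)):                       # two random word vectors
        counts, v, i_syn = run_word(word_vec, v, i_syn)
        state.append(counts)
    readout_features = np.concatenate(state)                   # input to a trained readout
    print(readout_features.shape)

In such a sketch only the readout would be trained (for example with a linear classifier) on the spike-count features, while the spiking network itself remains fixed.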

Highlight

One specific instance of the notion of a neurobiological causal model, the NBL model, was developed and investigated, demonstrating the benefits of such models.
The figure shows the different aspects of the NBL model (in the center) that were investigated: (1) how information can be encoded in a neurobiologically plausible way, (2) what functional impact different parameter choices have on its processing properties, (3) how it processes sentences, and (4) how it solves the binding problem.

The insights gained in the project are three-fold.
(1) The NBL model can be connected to cognitively relevant language processing tasks, which also means that the scientific understanding of neurobiology is sophisticated enough to build models that reach the cognitive level.
(2) The processing limitations imposed by the neurobiological constraints of the model differ from what might naively be assumed. Encoding with spike patterns, without spatial or rate coding, proved powerful enough to encode a large number of stimuli. The generalization properties of the NBL model were such that relatively little training data sufficed to extrapolate to a larger data set and even to novel contexts or novel words. At the same time, memory in the NBL model was limited: only long neuronal or synaptic time-scales could provide sentence-spanning processing memory, while network size and connectivity did not contribute to memory.
(3) The neurobiological properties of the model provided insights into possible implementations of processing aspects using biological neurons. Processing memory could be provided by neuronal adaptation and synaptic currents rather than by recurrent connectivity, as illustrated in the sketch below. The binding problem could be solved through the rich dynamic representation of information present within the neural network.
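The role of long time-scales for processing memory can be illustrated with a minimal numerical sketch: a brief input leaves a trace in a slowly decaying synaptic (or adaptation) variable for far longer than in a fast one. The time constants and values below are illustrative assumptions, not parameters or results of the NBL model.

    # Illustrative sketch: a 50 ms input pulse leaves a trace in a slow
    # synaptic variable long after a fast one has decayed away.
    # Time constants are assumptions chosen for illustration.
    import numpy as np

    dt = 1e-3
    t = np.arange(0.0, 2.0, dt)              # 2 s of simulated time

    def filtered_trace(tau, pulse_end=0.05):
        """Exponentially filtered trace of a 50 ms input pulse."""
        trace = np.zeros_like(t)
        for k in range(1, len(t)):
            drive = 1.0 if t[k] < pulse_end else 0.0
            trace[k] = trace[k - 1] + dt / tau * (-trace[k - 1] + drive)
        return trace

    fast = filtered_trace(tau=0.005)          # 5 ms time constant
    slow = filtered_trace(tau=0.500)          # 500 ms time constant

    idx = int(1.0 / dt)                       # inspect the traces 1 s after stimulus onset
    for name, trace in [("fast (5 ms)", fast), ("slow (500 ms)", slow)]:
        print(name, "trace at t = 1 s:", round(float(trace[idx]), 4))

One second after stimulus onset the fast trace has vanished while the slow trace still carries a measurable signal; in this sense long neuronal or synaptic time constants, rather than network size or connectivity, can bridge the duration of a sentence.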

Progress 2018

The work was primarily focused on documenting and validating earlier results to generate a coherent PhD thesis. However, new insights were gained in two research strands:
1) A neurobiological causal model - the NBL model - was found to have good generalization (systematicity) properties.
2) The properties of the NBL model were investigated with respect to binding tasks, such as correctly processing sentences that contain multiple instances of the same noun - problem-of-2 (Po2) sentences. The NBL model identified the correct noun after receiving a Po2 sentence in 95% of the cases; i.e. the required binding problem was solved (an illustrative Po2 item is sketched below).
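For illustration, a problem-of-2 item can be thought of as a sentence containing the same noun twice together with a role probe. The example below is an assumption about the task format, not an item from the actual training or test material.

    # Illustrative sketch of a problem-of-2 (Po2) binding item; the sentence,
    # probe and field names are assumptions, not the actual NBL task format.
    # The same noun ("dog") occurs twice, so the model must bind each
    # occurrence to its own thematic role rather than to word identity alone.
    po2_item = {
        "sentence": ["the", "dog", "chases", "the", "dog"],
        "probe": {"role": "agent", "verb": "chases"},
        "target": 1,        # token index of the noun filling the probed role
    }

    def answer_is_correct(predicted_index, item):
        """Score a single binding query: the model must point at the right token."""
        return predicted_index == item["target"]

    print(answer_is_correct(1, po2_item))    # True: the first "dog" is the agent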

Groundbreaking characteristics

Collaboration and the integration of information across disciplines were instrumental in obtaining the results presented here.
This project brings together high-level syntax and neurally plausible modelling of language, thereby crossing the disciplines of linguistics, computational linguistics, psycholinguistics and neuroscience. The problem of developing recurrent networks (RNNs) of spiking neurons for high-level syntax and semantics is one of the most challenging endeavours in language modelling.