Tenure Track 2

(last update 2020-05-12)

Hierarchical structure in natural language: bridging computational linguistics, neurobiology and formal semantics

Senior researcher: Willem Zuidema
PhD: Dieuwke Hupkes

Research description

A unique feature of human language – and the key property that arguably sets it apart in nature – is its hierarchical compositionality: all languages allow meaningful atomic units (e.g. words) to be combined into larger units, and these larger units to be further combined to form meaningful sentences. 

The work performed within this tenure track investigates:

(1) How corpus data and computational models can be used to study the nature of this combinatory system,

(2) How such a system may be implemented in a neural architecture,

(3) How the system may be learned by children and machines, and

(4) How such a system emerged in evolution.

Highlights

Highlight 1: Generalised Contextual Decomposition

Team members: Dieuwke Hupkes and Willem Zuidema

Deep learning models such as neural networks are now used in many public-facing applications. It is thus vital to understand thoroughly how such models operate and what kinds of biases they encode. However, the “black-box” nature of neural networks makes them notoriously difficult to interpret. In this study, we investigated what happens inside neural language models.

Although it has become clear in recent years that these models are highly proficient at processing language, how they manage to do so is still largely unknown. We argue that investigating a model’s output alone is no longer sufficient to obtain a thorough, qualitative understanding. Moreover, existing techniques, including some of our own, require the researcher to formulate specific hypotheses in advance.

In this new work, we explored a data-driven analysis method called Contextual Decomposition. We proposed a generalised version of this method, Generalised Contextual Decomposition (GCD), which we then employed to investigate the behaviour of language models. In particular, we evaluated the model on tasks pertaining to syntactic agreement and coreference resolution, and discovered that the model strongly relies on a default reasoning effect to perform these tasks. By applying these techniques to analyse language models, we gain unprecedented insight into the ways these models operate.
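To make the core idea concrete, the following minimal sketch (in Python, with illustrative function names of our own) shows contextual decomposition for a simple feed-forward network: every activation is split into a “relevant” part, originating in a phrase of interest, and an “irrelevant” remainder, with each nonlinearity distributed over the two parts via a Shapley-style linearisation. GCD as used in our work extends this scheme to LSTM gates and makes the treatment of biases and gate interactions explicit; assigning biases to the irrelevant part, as done below, is one simplified choice.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def shapley_split(f, beta, gamma):
        # Two-way Shapley linearisation of a nonlinearity f: the 'relevant'
        # share is f's average marginal effect of beta over both orderings.
        rel = 0.5 * (f(beta) + (f(beta + gamma) - f(gamma)))
        return rel, f(beta + gamma) - rel

    def cd_feedforward(layers, x, mask):
        # Decompose the logits of a stack of (W, b) ReLU layers into the
        # contribution of the masked-in input features (beta) and the rest
        # (gamma); beta + gamma always equals the ordinary forward pass.
        beta, gamma = x * mask, x * (1 - mask)
        for i, (W, b) in enumerate(layers):
            beta, gamma = W @ beta, W @ gamma + b  # simplification: biases count as 'irrelevant'
            if i < len(layers) - 1:                # no nonlinearity on the output layer
                beta, gamma = shapley_split(relu, beta, gamma)
        return beta, gamma

Masking in only the features of the subject, for instance, isolates how much of a verb’s logit is actually driven by the subject rather than by biases and initial states.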

Figure 1. Average contributions for the NounPP corpus of Lakretz et al. (2019). INIT denotes the contribution of the initial states. The figure reveals an asymmetry in the way the model encodes singularity and plurality: while plural verb predictions depend strongly on the subject, singular verb predictions do not.

Progress Update 2019

Much progress has been made on understanding how a neural architecture may represent the hierarchical structure of language and how such representations might be learned. In August 2019, we organized a successful week-long Lorentz workshop on this question. Multiple papers on this topic were published by the senior researcher and the PhD candidate.

An important new development was the focus on data-driven interpretability techniques for neural language models. We developed a technique called Generalised Contextual Decomposition and used it to analyse the flow of information in various neural language models. We demonstrated a default reasoning effect: models keeping track of number agreement between subjects and verbs assume ‘singular’ by default. We then generalised these findings to gendered pronouns in coreference constructions, showing that current models assume ‘male’ by default and require explicit evidence that a referent is female before predicting a female pronoun. This project thus revealed a gender bias in current neural language models.
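As a hypothetical illustration of how such a default shows up in the decomposition (the function and variable names below are ours, not from the paper): if the contributions of the initial states and biases alone already push the singular verb’s logit above the plural one, the model is treating ‘singular’ as a default rather than inferring it from the subject.

    def number_preferences(rel_subject, rel_init, singular_id, plural_id):
        # rel_subject / rel_init: decomposed contributions to the output
        # logits (e.g. from a GCD-style analysis). A large init_pref paired
        # with a small subject_pref indicates 'singular by default'.
        subject_pref = rel_subject[singular_id] - rel_subject[plural_id]
        init_pref = rel_init[singular_id] - rel_init[plural_id]
        return subject_pref, init_pref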

Synergy with Big Questions

The strand of work that Zuidema carries out in collaboration with Abnar (PhD) and Beinborn (Postdoc) falls within the team efforts of BQ1. This work focuses on predicting brain activity associated with words and sentences, using the MVPA framework. It nicely complements the work described above: together, the two projects relate language behaviour to brain activation. Diagnostic classification clarifies which strategies neural networks implement, while the MVPA work evaluates the extent to which these networks are good models of the computations happening in the brain.
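A minimal sketch of the kind of analysis involved, under the assumption of a standard cross-validated linear encoding model (the actual pipeline in the BQ1 work may differ, and the function name is ours): word or sentence representations from a network are mapped to voxel activity with ridge regression, and the quality of that mapping measures how brain-like the network’s representations are.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    def encoding_score(representations, voxel_data):
        # representations: (n_stimuli, n_dims) network activations;
        # voxel_data: (n_stimuli, n_voxels) brain recordings.
        # Returns the mean voxel-wise correlation between predicted and
        # observed activity, cross-validated over stimuli.
        scores = []
        for train, test in KFold(5, shuffle=True, random_state=0).split(representations):
            model = RidgeCV(alphas=np.logspace(-2, 4, 7))
            model.fit(representations[train], voxel_data[train])
            pred = model.predict(representations[test])
            r = [np.corrcoef(pred[:, v], voxel_data[test, v])[0, 1]
                 for v in range(voxel_data.shape[1])]
            scores.append(np.nanmean(r))
        return float(np.mean(scores))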

Thus, together these projects contribute greatly to several key issues and to the overarching quest of the LiI consortium, in particular the ambition to bridge the neurobiological and linguistic levels of description (BQ1). The project combines insights from computational linguistics, neurobiology and formal semantics to create completely new models that are adequate at both the linguistic and the neural level of description. Zuidema also contributed to the plans for BQ5.

Innovativeness and Interdisciplinarity

This work contributes to several key issues of the Language in Interaction consortium, in particular the ambition to bridge the neurobiological and linguistic levels of description. By combining insights from computational linguistics, neurobiology and formal semantics, the project creates completely new models that are adequate at both the linguistic and the neural level of description.