
DeepMind’s Selection-Inference Language Model System Generates Humanly Interpretable Reasoning Traces

Explainability is one of the most pressing concerns in machine learning research and development. Although contemporary large-scale language models (LMs) have demonstrated impressive question-answering capabilities, their inherent opacity can conceal just how these models reach their final answers, making it difficult for users to spot any possible mistakes or justify the outputs.

A DeepMind research team addresses this issue in the new paper Faithful Reasoning Using Large Language Models, proposing a forward-chaining selection-inference model that can perform faithful reasoning and provide a valid reasoning trace to improve reasoning quality and help users check and validate the final answers.

The proposed approach is based on the idea that LMs can perform faithful multi-step reasoning when the causal structure of the reasoning process mirrors the underlying logical structure of the problem. To realize this, the team built their system around selection-inference (SI), a novel backbone architecture comprising two fine-tuned language models: one for selection and one for inference.

The step-wise forward-reasoning backbone splits each reasoning step into two stages: 1) Given a question, the selection model first chooses a set of statements from the context; and 2) The inference model then computes the entailment of the selected statements, producing a new statement (the inference). This inference is added to the context, completing a single reasoning step. By iterating this SI process, the model produces a reasoning trace, and the final inference is used to answer the question.
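For readers who prefer code, the control flow of a single SI step can be sketched in a few lines of Python. The `toy_select` and `toy_infer` functions below are simple stand-ins for the two fine-tuned language models (the names and heuristics are ours, not DeepMind's); only the overall structure mirrors the procedure described above.

```python
def toy_select(question: str, context: list[str]) -> list[str]:
    # Selection step: pick context statements that share words with the question.
    # (Stand-in for the fine-tuned selection LM.)
    q_words = set(question.lower().split())
    return [s for s in context if q_words & set(s.lower().split())]


def toy_infer(selection: list[str]) -> str:
    # Inference step: the real model predicts an entailed statement; here we
    # simply join the selected statements as a placeholder "inference".
    return "Therefore: " + " and ".join(selection)


def si_step(question: str, context: list[str]) -> list[str]:
    """One SI step: select supporting statements, infer a new statement,
    and append it to the context for the next iteration."""
    selection = toy_select(question, context)
    inference = toy_infer(selection)
    return context + [inference]


# Example of one step over a toy context:
# si_step("Is the cat an animal?",
#         ["The cat is a mammal.", "All mammals are animals."])
```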

To enable the model to determine when to stop reasoning, the team introduces a two-stage halter that leverages a fine-tuned language model to predict whether the question can be answered given the current inference. If it cannot, the system proceeds to another SI iteration; if the halter outputs an answer, the process terminates and that answer is returned. If the SI cycle continues for a pre-specified number of iterations without reaching an answer, the system returns "unknown" rather than making a best guess.
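Continuing the sketch above, the halting behaviour can be outlined as follows. The `can_answer` heuristic is a hypothetical stand-in for the fine-tuned halter model, and it collapses the two-stage halter into a single call for brevity; the step cap and the "Unknown" fallback reflect the behaviour described above.

```python
def can_answer(question: str, inference: str) -> str | None:
    # Toy halter: return an answer if the latest inference appears to settle
    # the question, otherwise None. (Stand-in for the fine-tuned halter LM.)
    return inference if question.lower().rstrip("?") in inference.lower() else None


def reason(question: str, context: list[str], max_steps: int = 5) -> str:
    for _ in range(max_steps):
        context = si_step(question, context)        # one selection-inference step
        answer = can_answer(question, context[-1])  # ask the halter about the new inference
        if answer is not None:
            return answer                           # halter produced an answer: stop
    return "Unknown"                                # step budget exhausted: abstain
```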

The researchers observed a notable increase in performance when questions the model decided it could not faithfully answer were removed, and believe this approach can contribute to increased trust and safety of models when deployed in the real world, “where precision (rather than recall) is a priority.”

In their empirical study, the team compared their SI system with baseline models on the ProofWriter (PW) and EntailmentBankQA (EB) datasets. The proposed model achieved 88.1 percent and 78.1 percent final answer accuracy on PW and EB, respectively, outperforming the baselines by a large margin.

Overall, this work shows that the proposed method can faithfully answer questions using multi-step reasoning without sacrificing model performance. While the study focused on multi-step reasoning within a given context, the team plans to incorporate retrieval to populate the context in future work.

The paper Faithful Reasoning Using Large Language Models is on arXiv.


Author: Hecate He | Editor: Michael Sarazen

