Retrieving Reasoning Paths for Answering Complex Questions

Question Answering (QA) is a task in which a model answers questions posed in natural language, given a number of source text documents. Multi-hop open-domain questions are particularly challenging for machines, as answering them requires collecting multiple pieces of evidence scattered across multiple documents. This is difficult for common term-based retrieval methods, as the evidence documents may have little lexical overlap or semantic relationship with the original question.

The most common approach to open-domain QA is to use a non-parameterized model (such as TF-IDF or BM25) to retrieve a fixed set of documents, often from an open corpus such as Wikipedia. A neural reading comprehension model then extracts the answer span from the retrieved documents. Although these pipeline methods have been successful on single-hop QA, they often fail to retrieve the essential evidence needed to answer multi-hop questions. Moreover, independently searching a fixed list of documents does not capture the relationships between evidence documents through the bridge entities required for multi-hop reasoning.
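To make the pipeline concrete, below is a minimal sketch of the retrieve-then-read setup using TF-IDF retrieval, assuming a toy corpus and question; the documents, query and top-k value are illustrative stand-ins, and the reader step is only indicated in a comment.

```python
# Minimal sketch of a retrieve-then-read pipeline (toy corpus, illustrative query).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Seattle is a city in the state of Washington.",
    "The Space Needle is an observation tower in Seattle.",
    "Mount Rainier is a stratovolcano in Washington.",
]
question = "Which landmark is located in Seattle?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)   # one sparse TF-IDF vector per document
query_vector = vectorizer.transform([question])

# Score every document against the question and keep a fixed top-k set.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_k = scores.argsort()[::-1][:2]
retrieved = [corpus[i] for i in top_k]

# A neural reading comprehension model would then extract an answer span from
# `retrieved`; note that each document is scored independently of the others.
print(retrieved)
```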

Aiming to go beyond the standard pipeline that equips a TF-IDF-based retriever with a state-of-the-art neural reading comprehension model, researchers from the University of Washington, Salesforce Research and the Allen Institute for Artificial Intelligence recently introduced a new graph-based recurrent retrieval approach. The trainable framework retrieves reasoning paths over paragraphs by formulating the task as a neural path search on a massive-scale Wikipedia graph constructed from the raw Wikipedia articles and their internal links.
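As a rough illustration of the search space, here is a toy document graph in the spirit of the one described above, with articles as nodes and internal hyperlinks as directed edges; the titles and links are made-up examples, not the actual graph construction code.

```python
# Toy Wikipedia-style document graph: nodes are articles, directed edges
# follow internal hyperlinks. Titles and links are illustrative examples.
wiki_graph = {
    "Space Needle": ["Seattle", "Century 21 Exposition"],
    "Seattle": ["Washington (state)", "Space Needle"],
    "Century 21 Exposition": ["Seattle", "World's fair"],
    "Washington (state)": ["United States"],
    "World's fair": [],
    "United States": [],
}

def candidate_next_docs(path):
    """Documents reachable via hyperlinks from the last article on the path."""
    return wiki_graph.get(path[-1], [])

# A reasoning path is a sequence of linked documents; starting from a seed
# document, the retriever extends the path one hyperlink edge at a time.
path = ["Space Needle"]
print(candidate_next_docs(path))  # ['Seattle', 'Century 21 Exposition']
```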

Conditioned on the history of previously retrieved documents, the graph-based recurrent retriever sequentially retrieves each evidence document to form multiple inference paths in the entity graph. A reader model built on top of an existing reading comprehension model can then answer questions by ranking the retrieved reasoning paths. The strong interaction between the retriever model and the reader model enables the entire method to answer complex questions by exploring more accurate inference paths compared with other methods.
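The PyTorch sketch below conveys the flavor of one history-conditioned retrieval step under stated assumptions: a hidden state summarizes the documents retrieved so far, linked candidates are scored against that history, and the state is updated with the selected document. The dimensions, random stand-in encodings and greedy loop are illustrative simplifications rather than the paper's exact architecture, which decodes full paths with beam search.

```python
# Hedged sketch of a history-conditioned (recurrent) retrieval step.
# Dimensions and random candidate encodings are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentRetriever(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRUCell(dim, dim)  # folds each retrieved doc into the history state
        self.score = nn.Linear(dim, 1)

    def step(self, h, candidate_vecs):
        """Score each candidate document given the current history state h."""
        return self.score(candidate_vecs * h.unsqueeze(0)).squeeze(-1)

    def update(self, h, selected_vec):
        """Update the history state with the newly retrieved document."""
        return self.rnn(selected_vec.unsqueeze(0), h.unsqueeze(0)).squeeze(0)

dim = 128
retriever = RecurrentRetriever(dim)
h = torch.zeros(dim)                  # in practice, initialized from the question encoding
for _ in range(2):                    # two hops; real decoding uses beam search over paths
    candidates = torch.randn(5, dim)  # stand-ins for encodings of hyperlinked documents
    scores = retriever.step(h, candidates)
    best = scores.argmax().item()
    h = retriever.update(h, candidates[best])
```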

Overview of the graph-based retriever-reader framework

The retriever was trained in a supervised manner on annotated evidence paragraphs, using a strategy that combines negative sampling and data augmentation with inference-time decoding. Multi-hop QA questions were paired with multiple annotated paragraphs and single-hop questions with a single paragraph, and a ground-truth reasoning path was derived from the annotated data available in each dataset. To relax and stabilize the training process, the researchers augmented the training data with additional reasoning paths from which the answer can be derived.
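As a sketch of what one step of such supervised training might look like, the snippet below scores a gold next paragraph against sampled negative paragraphs with a binary cross-entropy loss; the scoring function and random encodings are hypothetical placeholders, not the paper's exact objective.

```python
# Hedged sketch of negative sampling at one retrieval step: the gold next
# paragraph should outscore sampled negatives. All tensors are placeholders.
import torch
import torch.nn.functional as F

def step_loss(history_state, gold_vec, negative_vecs, score_fn):
    """Binary cross-entropy over the gold paragraph and sampled negatives."""
    pos_score = score_fn(history_state, gold_vec.unsqueeze(0))  # shape (1,)
    neg_scores = score_fn(history_state, negative_vecs)         # shape (num_neg,)
    logits = torch.cat([pos_score, neg_scores])
    labels = torch.cat([torch.ones(1), torch.zeros(len(neg_scores))])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Illustrative usage with random stand-in encodings:
dim = 128
score_fn = lambda h, docs: (docs * h.unsqueeze(0)).sum(-1)  # hypothetical dot-product scorer
loss = step_loss(torch.randn(dim), torch.randn(dim), torch.randn(8, dim), score_fn)
print(loss.item())
```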

HotpotQA development set results

The researchers evaluated their graph-based recurrent retriever on the open-domain Wikipedia-sourced datasets HotpotQA, SQuAD Open and Natural Questions Open, where the new approach significantly outperformed all previous state-of-the-art methods.

The paper Learning to Retrieve Reasoning Paths Over Wikipedia Graph for Question Answering is on arXiv.


Author: Xuehan Wang | Editor: Michael Sarazen
