
How Smart is BERT? Evaluating the Language Model’s Commonsense Knowledge

Researchers dive deep into the large language model to discover how it encodes the structured commonsense knowledge it leverages on downstream commonsense tasks.

In the new paper Does BERT Solve Commonsense Task via Commonsense Knowledge?, a team of researchers from Westlake University, Fudan University and Microsoft Research Asia dives deep into the large language model to discover how it encodes the structured commonsense knowledge it leverages on downstream commonsense tasks.


The proven successes of pretrained language models such as BERT on various downstream tasks have stimulated research investigating the linguistic knowledge inside the model. Previous studies have revealed shallow syntactic, semantic and word sense knowledge in BERT; however, the question of how BERT deals with commonsense tasks has remained relatively unexamined.

CommonsenseQA is a multiple-choice question answering dataset built upon the CONCEPTNET knowledge graph. The researchers extracted multiple target concepts with the same semantic relation to a single source concept from CONCEPTNET, where each question has one of three target concepts as the correct answer. For example, “bird” is the source concept in the question “Where does a wild bird usually live?” and “countryside” is the correct answer from the possible target concepts “cage,” “windowsill,” and “countryside.”
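For readers who want a concrete picture, the snippet below sketches how one such question might be represented as a simple Python data structure. The field names are illustrative, not the dataset's official schema.

# Illustrative sketch of a CommonsenseQA-style item (field names are
# hypothetical, not the dataset's official JSON schema).
example_question = {
    "question": "Where does a wild bird usually live?",
    "source_concept": "bird",  # the CONCEPTNET concept the question is built around
    "target_concepts": ["cage", "windowsill", "countryside"],  # candidates sharing one relation
    "answer": "countryside",  # the correct target concept
}

# A model is scored on whether it picks the correct target concept.
def is_correct(prediction: str, item: dict) -> bool:
    return prediction == item["answer"]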


The researchers examined the presence of commonsense knowledge in BERT's sentence representations by investigating links from an answer concept to the related question concept, proposing that such commonsense links in BERT's representations reflect the use of structured commonsense knowledge.
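While the paper defines its own link measure, the following minimal Python sketch illustrates one plausible way to read such a link off BERT: averaging the attention that answer-concept tokens pay to question-concept tokens, using the Hugging Face transformers library. The model choice, token matching and averaging scheme here are assumptions for illustration, not the authors' exact procedure.

import torch
from transformers import BertTokenizer, BertModel

# Hypothetical "commonsense link" score: average attention from answer-concept
# tokens to question-concept tokens (an assumption for illustration only).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def link_strength(question: str, answer: str, question_concept: str) -> float:
    enc = tokenizer(f"{question} {answer}", return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])

    # Positions of the answer and question-concept (sub)tokens in the input.
    ans_pos = [i for i, t in enumerate(tokens) if t in tokenizer.tokenize(answer)]
    src_pos = [i for i, t in enumerate(tokens) if t in tokenizer.tokenize(question_concept)]
    if not ans_pos or not src_pos:
        return 0.0

    with torch.no_grad():
        out = model(**enc)

    # out.attentions holds one (1, heads, seq, seq) tensor per layer;
    # average over layers and heads, then read answer-to-concept weights.
    att = torch.stack(out.attentions).mean(dim=(0, 2))[0]
    return att[ans_pos][:, src_pos].mean().item()

print(link_strength("Where does a wild bird usually live?", "countryside", "bird"))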


The team designed two sets of experiments: one to examine the strength of the commonsense links captured in BERT's representations, and another to evaluate the correlation between commonsense link strength and model prediction performance. They concluded that BERT does indeed acquire commonsense knowledge during pretraining, and note that with fine-tuning the model relies more heavily on this commonsense information when making predictions. Strong commonsense links and a notable correlation between model predictions and commonsense link strength support these findings. The team also observed that model accuracy improves when the structured commonsense knowledge is stronger.
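To give a rough idea of what such a correlation check could look like in practice (the paper's actual statistical procedure may differ), one might compute a point-biserial correlation between per-question link strengths and a binary correct/incorrect flag, for example with SciPy. The numbers below are placeholder values, not results from the paper.

from scipy.stats import pointbiserialr

# Placeholder values for illustration only (not the paper's results):
# one link-strength score and one correctness flag per question.
link_strengths = [0.12, 0.34, 0.08, 0.29, 0.41]
is_correct = [0, 1, 0, 1, 1]  # 1 if the model answered that question correctly

corr, p_value = pointbiserialr(is_correct, link_strengths)
print(f"point-biserial correlation = {corr:.3f}, p = {p_value:.3f}")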

The researchers say this is the first study to show that BERT makes use of commonsense knowledge when solving CommonsenseQA questions, and that fine-tuning can further enable BERT to use its commonsense knowledge at a higher level. They hope their results will encourage further exploration and leveraging of BERT’s underlying mechanisms.

The paper Does BERT Solve Commonsense Task via Commonsense Knowledge? is on arXiv.


Reporter: Fangyu Cai | Editor: Michael Sarazen



Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1428 artificial intelligence solutions from 12 pandemic scenarios.



