
Google Introduces NLP Model Understanding Tool

Artificial intelligence does a lot of things extremely well, but just how it does these things often remains unclear, shrouded by what’s come to be known as the “black box” problem. This is particularly true in NLP, where researchers can waste a lot of time trying to figure out what went wrong when their models don’t perform as expected. Last week, Google Research released a paper tackling this issue with a new open-source analysis platform: the Language Interpretability Tool (LIT).

LIT is a toolkit and browser-based user interface (UI) for NLP model understanding. It has five major functions:

- Local explanations via salience maps, attention, and rich visualization of model predictions
- Aggregate analysis, including custom metrics, slicing and binning, and visualization of embedding spaces
- Counterfactual generation via manual edits or generator plug-ins, to dynamically create and evaluate new examples (see the sketch after this list)
- A side-by-side mode to compare two or more models, or one model on a pair of examples
- Extensibility to new model types, including classification, regression, span labeling, seq2seq and language modelling
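The counterfactual generators are a natural extension point. Below is a minimal sketch of a custom generator, assuming the `Generator` base class exposed in `lit_nlp.api.components`; the `WordScrambler` class, the `sentence` field name, and the word-shuffling logic are all hypothetical illustrations rather than code from the paper:

```python
import random

from lit_nlp.api import components as lit_components


class WordScrambler(lit_components.Generator):
    """Hypothetical counterfactual generator that shuffles word order."""

    def generate(self, example, model, dataset, config=None):
        # Derive one new example from the selected datapoint by
        # scrambling the words of its (assumed) "sentence" field.
        words = example["sentence"].split()
        random.shuffle(words)
        return [dict(example, sentence=" ".join(words))]
```

A generator returns a list of new examples derived from a selected datapoint; LIT runs them through the model so their predictions can be compared side by side with the original.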

LIT user interface

The LIT UI is written in TypeScript and communicates with a Python backend that hosts models, datasets, counterfactual generators, and other interpretation components. To keep pace with rapidly evolving NLP models, the Google researchers designed LIT around five principles:

- Flexible: support a wide range of NLP tasks, from classification and seq2seq to structured prediction
- Extensible: allow new interpretation methods, visualizations and counterfactual generators to be added by users
- Modular: keep components self-contained, portable and simple to compose
- Framework agnostic: work with models from any framework that can be wrapped in a Python class, such as TensorFlow or PyTorch
- Easy to use: minimize the engineering effort needed to get a model and dataset running in the tool

Built-in modules in LIT
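To give a sense of the workflow, here is a minimal sketch of a LIT backend, assuming the Python API from the open-source release (installable via pip install lit-nlp). `MyModel` and `MyDataset` are hypothetical stand-ins for real wrappers, and the dummy predictions are illustrative only:

```python
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types


class MyDataset(lit_dataset.Dataset):
    """Hypothetical two-class sentiment dataset."""

    def __init__(self):
        self._examples = [
            {"sentence": "a great movie", "label": "1"},
            {"sentence": "a terrible movie", "label": "0"},
        ]

    def spec(self):
        return {
            "sentence": lit_types.TextSegment(),
            "label": lit_types.CategoryLabel(vocab=["0", "1"]),
        }


class MyModel(lit_model.Model):
    """Hypothetical classifier wrapper; a real one would load a model."""

    def input_spec(self):
        return {"sentence": lit_types.TextSegment()}

    def output_spec(self):
        return {
            "probas": lit_types.MulticlassPreds(vocab=["0", "1"], parent="label")
        }

    def predict_minibatch(self, inputs):
        # Dummy uniform predictions; a real wrapper would run inference here.
        return [{"probas": [0.5, 0.5]} for _ in inputs]


def main():
    models = {"my_model": MyModel()}
    datasets = {"my_data": MyDataset()}
    # Serve the LIT UI on localhost with the default server flags.
    server = dev_server.Server(models, datasets, **server_flags.get_flags())
    server.serve()


if __name__ == "__main__":
    main()
```

Running the script starts the Python backend and serves the TypeScript UI at a local URL, where the hosted models and datasets appear in the browser interface.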

The Google researchers point out that LIT is an interactive evaluation tool, and as such is not suitable for training-time monitoring or very large datasets.

The paper The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models is on arXiv. The tool has been open-sourced on GitHub.


Analyst: Reina Qi Wan | Editor: Michael Sarazen; Fangyu Cai


Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.

Click here to find more reports from us.
