Google Introduces NLP Model Understanding Tool

Artificial intelligence does a lot of things extremely well, but just how it does them often remains unclear, shrouded by what’s come to be known as the “black box” problem. This is particularly true in NLP, where researchers can waste a lot of time trying to figure out what went wrong when their models don’t perform as expected. Last week, Google Research released a paper tackling this issue with a new open-source analysis platform: the Language Interpretability Tool (LIT).

LIT is a toolkit and browser-based user interface (UI) for NLP model understanding. It has five major functions:

  • Supports local explanations, including salience maps, attention, and rich visualizations of model predictions
  • Supports aggregate analysis, including metrics, embedding spaces, and flexible slicing
  • Allows switching seamlessly between the above to test local hypotheses and validate them over a dataset
  • Allows new data points to be added at any time and visualizes their effect immediately
  • Allows visualizing comparisons between two models or two data points on the same interface
[Image: The LIT user interface]

The LIT UI is written in TypeScript and communicates with a Python backend that hosts models, datasets, counterfactual generators, and other interpretation components. To keep pace with the continuous advancement of NLP models, the Google researchers designed LIT around five principles:

  • Flexible to support a wide range of NLP tasks, including classification, seq2seq, language modelling and structured prediction
  • Extensible so that it can be reconfigured and extended for newly added workflows
  • Modular with portable independent components to select from based on particular needs
  • Framework agnostic: works with any model that can be run from Python
  • Easy to use, requiring only a small amount of code (see the sketch below)
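
Reflecting those last two principles, below is a minimal sketch in Python of how a model and dataset might be wrapped for LIT, following the wrapper pattern shown in the project’s documentation. The toy sentiment classifier and the field names used here (‘sentence’, ‘label’, ‘probas’) are illustrative assumptions, and exact class and method names may differ across LIT versions.

```python
# A minimal sketch of serving a model in LIT. The toy classifier, dataset,
# and field names are illustrative only, not LIT's canonical example.
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ['negative', 'positive']


class ToyDataset(lit_dataset.Dataset):
  """A tiny in-memory sentiment dataset."""

  def __init__(self):
    self._examples = [
        {'sentence': 'A great movie.', 'label': 'positive'},
        {'sentence': 'A terrible plot.', 'label': 'negative'},
    ]

  def spec(self):
    # Declares the fields LIT should expect in each example.
    return {
        'sentence': lit_types.TextSegment(),
        'label': lit_types.CategoryLabel(vocab=LABELS),
    }


class ToyModel(lit_model.Model):
  """Framework-agnostic wrapper: any Python-callable predictor fits here."""

  def predict_minibatch(self, inputs):
    # Stand-in scoring logic; a real wrapper would call the underlying model.
    preds = []
    for ex in inputs:
      p_pos = 0.9 if 'great' in ex['sentence'] else 0.1
      preds.append({'probas': [1 - p_pos, p_pos]})
    return preds

  def input_spec(self):
    return {'sentence': lit_types.TextSegment()}

  def output_spec(self):
    return {'probas': lit_types.MulticlassPreds(vocab=LABELS, parent='label')}


if __name__ == '__main__':
  # Host the Python backend; the TypeScript UI is served on a local port.
  lit_demo = dev_server.Server(
      models={'toy': ToyModel()},
      datasets={'toy_data': ToyDataset()},
      **server_flags.get_flags())
  lit_demo.serve()
```

Because LIT only asks for an input/output spec and a predict function, the same wrapper pattern should apply whether the underlying model is built in TensorFlow, PyTorch, or plain Python.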
[Image: Built-in modules in LIT]

The Google researchers point out that LIT is an interactive evaluation tool and is not suitable for training-time monitoring or very large datasets.

The paper The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models is on arXiv. The tool has been open-sourced on GitHub.


Analyst: Reina Qi Wan | Editor: Michael Sarazen; Fangyu Cai


Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.

Click here to find more reports from us.
