AI Machine Learning & Data Science Research

Princeton, DeepMind & NYU Research Distills Symbolic Representations of DL Models Using Inductive Biases

A team of researchers from Princeton, DeepMind and New York University has introduced a new method that extracts symbolic representations from deep learning models by introducing strong inductive biases.

A team of researchers from Princeton, DeepMind and New York University has introduced a new method that extracts symbolic representations from deep learning models by introducing strong inductive biases. The method imposes motivated inductive biases on graph neural networks (GNNs) and Hamiltonian GNs to learn interpretable representations and improve zero-shot generalization (the ability to generalize to unseen settings without additional training).

In machine learning, symbolic models composed of closed-form symbolic expressions offer advantages such as compact algebraic form, clear interpretability and strong generalization. Discovering these expressions, however, can be challenging. Symbolic regression, a supervised machine learning technique, is one way to do so, but it typically relies on genetic algorithms whose search space grows exponentially with the number of input variables and operators. Many machine learning problems, especially those in high dimensions, therefore remain intractable for traditional symbolic regression approaches.
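To make the idea of searching a space of expressions concrete, here is a minimal sketch of symbolic regression by random search over small expression trees. All names (`random_expr`, `fit`, the operator set) are illustrative assumptions, not the paper's method; real systems use genetic programming with crossover and mutation over far richer operator sets.

```python
import random

random.seed(0)

# Toy symbolic regression: search over small expression trees built from
# x, small integer constants, and two binary operators. Illustrative only.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_expr(depth=2):
    """Build a random expression tree over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1, 2, 3])
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    """Recursively evaluate an expression tree at a given x."""
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fit(xs, ys, trials=20000):
    """Return the sampled expression with the lowest squared error."""
    best, best_err = None, float("inf")
    for _ in range(trials):
        expr = random_expr()
        err = sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best, best_err = expr, err
    return best, best_err

xs = [0, 1, 2, 3, 4]
ys = [2 * x + 1 for x in xs]   # hidden law: y = 2x + 1
expr, err = fit(xs, ys)
```

Even this crude search recovers a simple hidden law, but the cost of exhaustively sampling trees explodes as variables and operators are added, which is exactly the scaling problem the article describes.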

There are also deep learning methods that can effectively train complex models on high-dimensional datasets. These learned models, however, are effectively black boxes that are difficult to interpret and explain. And when prior knowledge about the data is unavailable, generalization remains challenging.


The researchers propose a general deep learning framework that leverages the advantages of both symbolic regression and deep learning. The model has a separable internal structure that provides an inductive bias motivated by the problem at hand. In the case of interacting particles, the researchers choose GNNs as the inductive bias for their architecture. They trained the model end-to-end on the available data and then fit symbolic expressions to the model's internal functions.
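The "separable internal structure" can be pictured as a message-passing step in which a per-edge message function and a per-node update function are distinct pieces. The sketch below is an illustrative NumPy mock-up, not the paper's architecture: `edge_fn` and `node_fn` stand in for learned networks, and it is these separable pieces that symbolic regression can later target one at a time.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_fn(xi, xj):
    """Message from node j to node i; a stand-in for a learned edge network."""
    return np.tanh(xj - xi)

def node_fn(xi, agg):
    """Node update from aggregated messages; a stand-in for a learned update."""
    return xi + 0.1 * agg

def gnn_step(x, edges):
    """One round of message passing over directed edges (i, j) meaning j -> i."""
    agg = np.zeros_like(x)
    for i, j in edges:
        agg[i] += edge_fn(x[i], x[j])      # sum-aggregate incoming messages
    return np.array([node_fn(x[i], agg[i]) for i in range(len(x))])

x = rng.normal(size=(4, 2))                # 4 nodes, 2 features each
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a directed ring
x_new = gnn_step(x, edges)
```

Because each edge contributes an independent message, fitting a compact formula to the message function alone is far more tractable than fitting one to the whole network's input-output map.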

The researchers suggest that replacing functions inside the deep learning model with the fitted symbolic expressions could uncover new symbolic expressions on non-trivial datasets.

Overview of the researchers' approach to imposing and exploiting inductive biases on GNNs.

Applying a Flattened Hamiltonian Graph Network enabled the researchers to learn symbolic forms of pairwise interaction energies. The GNNs' substructure also made it possible to give the learned representations and computations more fine-grained interpretations.

The paper Discovering Symbolic Models from Deep Learning with Inductive Biases is on arXiv.


Author: Herin Zhao | Editor: Michael Sarazen

We know you don’t want to miss any story. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
