Algorithms are everywhere. They spell out the specific instructions computers need to carry out tasks, from self-driving vehicles to recommendation systems to the humble microwave oven. Their ability to automate reasoning has made them a cornerstone of contemporary society. Full neural network models can also serve as task solvers, using additional information from data to tailor existing algorithms to real-world problems; the trade-off is that such systems sacrifice generalization ability.
In a new paper, a research team from DeepMind explores how neural networks can be fused with algorithmic computation and demonstrates an elegant neural end-to-end pipeline that goes straight from raw inputs to general outputs while emulating an algorithm internally.
Algorithms typically come with strong general guarantees and form the basis of software engineering across countless domains. An algorithm's invariants can be stated as a precondition (what kind of input it expects) and a postcondition (what it can then guarantee about its outputs after execution). Despite these guarantees, algorithms are inflexible with respect to the problem being tackled. Conversely, neural networks trained on a given problem instance cannot be guaranteed to generalize to larger instances, but they can adapt to a much wider range of problems.
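To make the precondition/postcondition framing concrete, here is a minimal Python sketch (the function name and checks are illustrative, not from the paper) showing how a classical algorithm's invariants can be stated and checked for any valid input size:

```python
def sort_ascending(xs):
    """A classical algorithm stated with its invariants.

    Precondition: `xs` is a finite list of mutually comparable items.
    Postcondition: the result is `xs` rearranged into non-decreasing
    order -- guaranteed for ANY valid input, regardless of size.
    """
    assert isinstance(xs, list)  # precondition check
    result = sorted(xs)          # the algorithm itself
    # Postcondition checks: ordered, and a permutation of the input.
    assert all(a <= b for a, b in zip(result, result[1:]))
    assert sorted(xs) == result
    return result

print(sort_ascending([3, 1, 2]))  # [1, 2, 3]
```

The guarantee holds for a three-element list or a three-million-element list alike; that scale-independence is exactly what a neural network trained on small instances cannot promise.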
To get the best of both worlds, previous studies have attempted to combine algorithms and deep learning. Approaches have included training deep learning models that use existing algorithms as fixed external tools; teaching deep neural networks to imitate the workings of an existing algorithm by producing the same outputs; and exploiting the abstract commonalities among multiple known algorithms to enable new algorithms to be derived.
Typically, a real-world problem is first fitted to a known problem class, and then an appropriate algorithm is chosen to solve it. Algorithms reason about problems in an abstract space, which makes it easier to build theoretical connections between the target problem and the known problem class. However, this kind of abstraction often involves drastic information loss, reducing the system’s ability to accurately portray the dynamics of the real world. To circumvent this issue, the DeepMind researchers applied deep learning to replace manual feature extraction from raw data, resulting in significant performance gains.
The idea behind algorithmic reasoning is to build algorithmically-inspired neural networks that can execute an algorithm on abstracted inputs. Following this schema, the proposed neural end-to-end pipeline is designed to emulate an algorithm internally and go straight from raw inputs to general outputs. More specifically, since natural inputs are often high-dimensional, noisy and prone to changing rapidly, the method first trains an algorithmic reasoner to imitate the algorithm, yielding encoder and decoder functions that carry data to and from the latent space of the processor network. Appropriate encoder and decoder neural networks are then set up to process raw data and produce the expected outputs. Finally, the algorithmic reasoner's encoder and decoder functions are swapped out for these raw-data encoder and decoder networks, whose parameters are learned by gradient descent.
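The swap step above can be sketched in a few lines. This is a deliberately simplified illustration, not the paper's implementation: all dimensions, the use of single random linear maps in place of trained networks, and the `tanh` nonlinearity are assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not prescribe these.
RAW_DIM, ABSTRACT_DIM, LATENT_DIM = 32, 8, 16

def linear(d_in, d_out):
    """Stand-in for a learned neural network: one random linear map."""
    return rng.standard_normal((d_in, d_out)) * 0.1

# Stage 1: train (encoder f, processor P, decoder g) to imitate the
# algorithm on abstract inputs.  Here they are placeholder weights.
f = linear(ABSTRACT_DIM, LATENT_DIM)   # abstract input -> latent space
P = linear(LATENT_DIM, LATENT_DIM)     # latent "algorithm" emulation
g = linear(LATENT_DIM, ABSTRACT_DIM)   # latent space -> abstract output

# Stage 2: swap f and g for raw-data networks; P carries over intact.
f_raw = linear(RAW_DIM, LATENT_DIM)    # raw input -> latent (trainable)
g_raw = linear(LATENT_DIM, 1)          # latent -> task output (trainable)

def pipeline(x_raw):
    """End-to-end: raw input -> latent algorithm emulation -> output."""
    z = np.tanh(x_raw @ f_raw)         # new raw-data encoder
    z = np.tanh(z @ P)                 # retained algorithmic processor
    return z @ g_raw                   # new task decoder

y = pipeline(rng.standard_normal(RAW_DIM))
print(y.shape)  # (1,)
```

The key design point is that only `f_raw` and `g_raw` would be fitted by gradient descent in stage 2, while the processor `P` retains what it learned from imitating the algorithm.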
The researchers say their neural algorithmic reasoning pipeline offers a strong approach to applying algorithms on natural inputs. The blueprint has already proven useful across a range of domains, including reinforcement learning and genome assembly. The team believes neural algorithmic reasoning has transformative potential for running classical algorithms on inputs previously considered inaccessible.
The paper Neural Algorithmic Reasoning is on arXiv.
Author: Hecate He | Editor: Michael Sarazen