Neural algorithmic reasoning, proposed in a 2021 paper by DeepMind researchers Veličković and Blundell as "the art of building neural networks that are able to execute algorithmic computation," has driven progress on algorithmic tasks. Most contemporary models, however, are trained to solve a single specific task, leaving them unable to generate new knowledge from existing observations, especially when the target knowledge cannot be obtained from the training data.
In the new paper A Generalist Neural Algorithmic Learner, a research team from DeepMind, University of Oxford, IDSIA, Mila, and Purdue University introduces a novel generalist neural algorithmic learner — a single graph neural network (GNN) capable of simultaneously solving various classical algorithms (e.g. sorting, searching, dynamic programming, path-finding and geometry) at single-task expert level.
The proposed generalist neural algorithmic learner is a single-processor GNN model based on the encode-process-decode paradigm from the CLRS algorithmic reasoning benchmark introduced in June. At each time step of a given task, a task-based encoder embeds the inputs and the current hints (time series of the algorithm's intermediate states) as high-dimensional vectors. These embeddings are then fed into the processor (the single shared GNN), which transforms the input node, edge and graph embeddings into processed node embeddings. Finally, the processed embeddings are decoded by a task-based decoder to predict the hints for the next step and the outputs at the final step.
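To make the encode-process-decode loop concrete, here is a minimal NumPy sketch of one algorithm step. All dimensions, weight matrices and function names here are illustrative assumptions, not the paper's actual architecture (the real processor is a learned message-passing GNN with task-specific encoders/decoders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only
NUM_NODES, IN_DIM, HID_DIM = 4, 3, 8

def encode(inputs, W_enc):
    """Task-based encoder: embed raw inputs/hints as high-dimensional vectors."""
    return inputs @ W_enc                          # (NUM_NODES, HID_DIM)

def process(node_emb, adj, W_msg, W_upd):
    """Shared GNN processor: one round of message passing over the graph."""
    messages = adj @ (node_emb @ W_msg)            # aggregate neighbour messages
    return np.maximum(node_emb @ W_upd + messages, 0.0)  # ReLU update

def decode(node_emb, W_dec):
    """Task-based decoder: predict next-step hints / final outputs."""
    return node_emb @ W_dec                        # (NUM_NODES, 1)

# Toy 4-node path graph and randomly initialised parameters
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
inputs = rng.normal(size=(NUM_NODES, IN_DIM))
W_enc = rng.normal(size=(IN_DIM, HID_DIM))
W_msg = rng.normal(size=(HID_DIM, HID_DIM))
W_upd = rng.normal(size=(HID_DIM, HID_DIM))
W_dec = rng.normal(size=(HID_DIM, 1))

# One encode -> process -> decode step
h = encode(inputs, W_enc)
h = process(h, adj, W_msg, W_upd)
hints = decode(h, W_dec)
print(hints.shape)  # one predicted hint value per node
```

Because only the encoders and decoders are task-specific, the same processor weights can be reused across all thirty CLRS algorithms, which is the core design choice enabling a single generalist model.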
In their empirical study, the team compared their approach with state-of-the-art models on the CLRS-30 benchmark, where the proposed generalist neural algorithmic learner achieved an absolute improvement of over 20 percent on average over prior best results.
Overall, this work validates the proposed generalist neural algorithmic learner's ability to effectively incorporate reasoning capabilities across diverse tasks and to match or exceed the out-of-distribution (OOD) performance of single-task expert models. The team hopes their contributions will help scale neural algorithmic learning to new domains and applications.
The paper A Generalist Neural Algorithmic Learner is on arXiv.
Author: Hecate He | Editor: Michael Sarazen, Chain Zhang