When designing algorithms, researchers tend to focus on two key properties: correctness (producing the right outputs) and efficiency (doing so with minimal resources). Both raise challenges for learned algorithms: accuracy typically does not extend beyond the training distribution of a given dataset, and techniques for improving efficiency have remained relatively unexplored in most previous studies.
UK-based AI company DeepMind recently introduced a new approach designed to improve the generalizability (correctness beyond the training distribution) and efficiency of algorithms represented by neural networks. The researchers propose that carefully designing a neural network's input and output interface and making good use of supervised learning are central to tackling generalization and efficiency challenges. Their work applies the neural program induction paradigm, training neural networks to represent algorithms that solve tasks.
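To make the paradigm concrete, here is a minimal sketch (not the paper's actual code) of what supervised neural program induction can look like: a policy network outputs a distribution over discrete instructions and is trained to imitate step-by-step traces produced by a known teacher algorithm. The network sizes and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: observation vector length and instruction-set size.
OBS_DIM, NUM_INSTRUCTIONS = 16, 8

# A small policy network mapping observations to logits over instructions.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_INSTRUCTIONS),
)

def supervised_step(obs_batch, teacher_actions, optimizer):
    """One behavior-cloning update toward the teacher's chosen instructions."""
    logits = policy(obs_batch)
    loss = nn.functional.cross_entropy(logits, teacher_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```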

In existing systems, the input size at each step is usually problem-dependent and grows with the instance. This not only increases the compute cost at each step, but can also hurt the system's ability to generalize to new, unseen sequences or sizes, as the model becomes accustomed to the sequence lengths encountered during training.
The DeepMind researchers found that using a constant-size partial view of the input helps the system generalize outside of the training distribution while also avoiding the problem of growing compute cost.
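A rough sketch of the idea, under the assumption that the agent observes a fixed window around a movable pointer (the window radius and padding value here are arbitrary choices, not the paper's):

```python
import numpy as np

WINDOW = 3  # hypothetical window radius; a tunable choice

def interface_view(tape, pointer, window=WINDOW):
    """Return a fixed-length slice of `tape` centered on `pointer`,
    padded so the observation shape is constant for any tape length."""
    padded = np.pad(tape, window, mode="constant", constant_values=-1)
    center = pointer + window
    return padded[center - window : center + window + 1]

tape = np.array([5, 2, 9, 1, 7, 3])
print(interface_view(tape, 0))  # [-1 -1 -1  5  2  9  1]
print(interface_view(tape, 5))  # [ 9  1  7  3 -1 -1 -1]
```

However long the tape grows, the observation stays the same shape, so per-step compute is constant and the network never sees inputs larger than those it was built for.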
While a typical instruction set can be either task-specific or task-agnostic, the researchers designed a novel instruction set in which each instruction carries a list of arguments specifying how and where it executes. This lets the set represent a variety of algorithms and defines a structured action space.
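A hedged sketch of such a structured action space, where each action is an opcode plus arguments saying where to act (the opcode names below are illustrative, not the paper's exact set):

```python
from dataclasses import dataclass
from typing import Tuple

OPCODES = ("MOVE_PTR", "SWAP", "COMPARE", "DONE")  # illustrative only

@dataclass
class Instruction:
    opcode: str             # what to do
    args: Tuple[int, ...]   # how/where to do it, e.g. (pointer_id, offset)

def execute(state, instr):
    """Apply one instruction to a state dict holding a 'tape' (list of
    values) and 'ptrs' (list of pointer positions)."""
    if instr.opcode == "MOVE_PTR":
        ptr_id, offset = instr.args
        state["ptrs"][ptr_id] += offset
    elif instr.opcode == "SWAP":
        i, j = state["ptrs"][instr.args[0]], state["ptrs"][instr.args[1]]
        state["tape"][i], state["tape"][j] = state["tape"][j], state["tape"][i]
    return state

state = {"tape": [3, 1, 2], "ptrs": [0, 1]}
execute(state, Instruction("SWAP", (0, 1)))  # tape becomes [1, 3, 2]
```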

The team tested their approach on sorting tasks, a standard benchmark for neural program induction models. Compared with previous methods that use a full view of the input rather than the proposed interface view, the new approach generalized well to both smaller and larger instance sizes, while the full-view models performed relatively poorly outside of the training range.
Taking this network, the researchers then used reinforcement learning (RL) to further optimize the model, aiming to surpass the teacher, i.e. the supervised learning policy. To deal with the challenges posed by larger instance sizes, the researchers added an imitation loss and reward shaping, in which the reward increases as the system solves a task in fewer steps. This was found to improve algorithm efficiency.
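A minimal sketch of how an imitation term and a step-count shaped reward might be combined, assuming a standard policy-gradient setup; the names (`policy_logits`, `beta`) and the exact weighting are illustrative, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def shaped_reward(solved: bool, steps_used: int, max_steps: int) -> float:
    """Reward solving the task, with a bonus for using fewer steps."""
    if not solved:
        return 0.0
    return 1.0 + (max_steps - steps_used) / max_steps  # fewer steps, more reward

def total_loss(policy_logits, actions, advantages, teacher_actions, beta=0.1):
    # Standard policy-gradient term over the agent's own actions...
    logp = F.log_softmax(policy_logits, dim=-1)
    pg_loss = -(logp.gather(1, actions.unsqueeze(1)).squeeze(1) * advantages).mean()
    # ...plus an imitation (cross-entropy) term toward the teacher's actions,
    # which keeps the policy anchored while RL searches for shorter solutions.
    imitation_loss = F.cross_entropy(policy_logits, teacher_actions)
    return pg_loss + beta * imitation_loss
```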

The experiment results show that on the same sorting task, where all models achieve a 100 percent solve rate across all instance sizes, the RL + imitation setting produces the shortest solutions overall and the lowest compute consumption, outperforming the teacher policies. Pure RL, trained from scratch without imitation, works well only on smaller instance sizes.
The researchers identify a number of limitations of the approach and propose directions for future work, suggesting that a universal instruction set could be designed for application across all models.
The paper Strong Generalization and Efficiency in Neural Programs is on arXiv.
Author: Reina Qi Wan | Editor: Michael Sarazen & Yuan Yuan

