AI Machine Learning & Data Science Research

Novel Hybrid Continual Learning Algorithm Counters Agent Forgetfulness

Researchers have introduced a novel hybrid continual learning algorithm, Adversarial Continual Learning (ACL), which reduces reliance on the persistent explicit or implicit replay of experiences through stored original samples.

If a human can play the guitar, they can draw on some of those core skills when learning to play a zither. Continual learning is designed to enable machines to do something similar: learn new tasks without forgetting previously learned ones. When a new task is encountered, general strategies supply basic skills that can be adapted for task-specific learning.

Researchers aim to train artificial learning agents to perform tasks sequentially under different conditions by developing both task-specific and task-invariant skills. Existing approaches, however, do not scale well to a large number of tasks because the amount of memory available for each task is limited.

A team from Facebook AI Research and UC Berkeley recently introduced a novel hybrid continual learning algorithm, Adversarial Continual Learning (ACL), which reduces reliance on the persistent explicit or implicit replay of experiences through stored original samples. The ACL method learns a task-specific (private) latent space for each task and a task-invariant (shared) feature space for all tasks, enabling better knowledge transfer as well as better recall of previous tasks. The model incorporates architectural growth to prevent the forgetting of task-specific skills, and uses an experience replay approach to preserve shared skills.
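To make the shared/private structure concrete, here is a minimal PyTorch sketch of the factorization, assuming simple MLP encoders; the layer sizes, module names, and the `add_task` growth step are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ACLSketch(nn.Module):
    """Rough sketch of the shared/private factorization described in ACL.

    One shared encoder is reused for every task, while a compact private
    encoder and a classification head are grown for each new task.
    """

    def __init__(self, in_dim=784, feat_dim=64):
        super().__init__()
        # Task-invariant (shared) feature extractor, trained on all tasks.
        self.shared = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                    nn.Linear(256, feat_dim))
        # Task-specific (private) modules and heads, one per task.
        self.private = nn.ModuleList()
        self.heads = nn.ModuleList()
        self.in_dim, self.feat_dim = in_dim, feat_dim

    def add_task(self, num_classes):
        """Grow a compact private module and head when a new task arrives."""
        self.private.append(nn.Sequential(nn.Linear(self.in_dim, self.feat_dim),
                                          nn.ReLU()))
        self.heads.append(nn.Linear(2 * self.feat_dim, num_classes))

    def forward(self, x, task_id):
        z_shared = self.shared(x)                    # task-invariant features
        z_private = self.private[task_id](x)         # task-specific features
        z = torch.cat([z_shared, z_private], dim=1)  # joint representation
        return self.heads[task_id](z), z_shared, z_private


# Example: a 10-class task on flattened 28x28 inputs.
model = ACLSketch()
model.add_task(num_classes=10)
logits, z_s, z_p = model(torch.randn(8, 784), task_id=0)
print(logits.shape)  # torch.Size([8, 10])
```

Only the compact private modules and heads grow as new tasks arrive; the single shared encoder is reused throughout.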

Adversarial Continual Learning (ACL) algorithm

Catastrophic forgetting can occur when representations learned over a series of tasks shift to accommodate the current task, degrading performance on earlier ones. The ACL method breaks the conventional single representation learned across a series of tasks into two parts: task-specific features and a core structure shared by all tasks.
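The separation between the two parts is encouraged adversarially: a discriminator tries to guess the source task from the shared features, while the shared encoder is trained to fool it, pushing task identity into the private modules. Below is a minimal sketch of such an objective, assuming a small MLP discriminator and a uniform-prediction target for the encoder step; both are simplifications rather than the paper's exact GAN-style formulation.

```python
import torch
import torch.nn as nn

feat_dim, num_tasks = 64, 5

# Discriminator guesses which task a shared feature vector came from.
discriminator = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                              nn.Linear(64, num_tasks))
ce = nn.CrossEntropyLoss()

def discriminator_loss(z_shared, task_labels):
    # Discriminator step: learn to predict the task id from shared features.
    return ce(discriminator(z_shared.detach()), task_labels)

def shared_adversarial_loss(z_shared):
    # Shared-encoder step: push the discriminator toward a uniform prediction,
    # so the shared features carry as little task identity as possible.
    logits = discriminator(z_shared)
    uniform = torch.full_like(logits, 1.0 / num_tasks)
    return -(uniform * logits.log_softmax(dim=1)).sum(dim=1).mean()

# Toy check with random shared features labelled as coming from "task 2".
z = torch.randn(8, feat_dim)
t = torch.full((8,), 2, dtype=torch.long)
print(discriminator_loss(z, t).item(), shared_adversarial_loss(z).item())
```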

To prevent the catastrophic forgetting of task-specific features, the ACL method uses compact modules that can be stored in memory. If the factorization is successful, the shared core structure can remain largely immune to forgetting. However, the researchers found empirically that the entanglement problem cannot be completely solved when tasks have little in common or the domain shift between them is too large. Using tiny replay frames containing a small number of old data samples helps retain higher accuracy and reduce forgetting.
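Such tiny replay frames can be sketched as a small per-task buffer whose contents are mixed into each new batch; the capacity and replacement policy below are illustrative assumptions, not the paper's exact settings.

```python
import random
from collections import defaultdict

class TinyReplayBuffer:
    """Keep a handful of raw samples per past task and mix them into new batches."""

    def __init__(self, samples_per_task=5):
        self.samples_per_task = samples_per_task
        self.storage = defaultdict(list)  # task_id -> list of (x, y)

    def add(self, task_id, x, y):
        # Once a task's bucket is full, overwrite a random stored example
        # so the buffer stays tiny.
        bucket = self.storage[task_id]
        if len(bucket) < self.samples_per_task:
            bucket.append((x, y))
        else:
            bucket[random.randrange(self.samples_per_task)] = (x, y)

    def sample(self, k):
        # Draw up to k stored examples, mixed across all previous tasks.
        pool = [item for bucket in self.storage.values() for item in bucket]
        return random.sample(pool, min(k, len(pool)))


# Example: store a few samples from tasks 0 and 1, then replay 4 of them.
buf = TinyReplayBuffer(samples_per_task=3)
for t in (0, 1):
    for i in range(10):
        buf.add(t, x=[0.0] * 784, y=i % 10)
print(len(buf.sample(4)))  # 4
```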

The method was evaluated on commonly used benchmark datasets for T-split class-incremental learning, and established a new state of the art on 20-Split miniImageNet, 5-Datasets, 20-Split CIFAR100, Permuted MNIST, and 5-Split MNIST. The results show that adversarial learning of shared and private latent representations, combined with orthogonality constraints, allows compact private modules to be stored in memory and effectively prevents forgetting.
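The orthogonality constraint mentioned above can be written as a penalty on the cross-correlation between shared and private feature batches; this squared Frobenius form follows similar factorization work and is given here only as an illustrative sketch, not necessarily the paper's exact term.

```python
import torch

def orthogonality_loss(z_shared, z_private):
    """Penalize correlation between shared and private feature batches.

    z_shared, z_private: (batch, feat_dim) tensors. The squared Frobenius
    norm of their cross-correlation is zero when every shared feature
    dimension is uncorrelated with every private dimension over the batch.
    """
    return (z_shared.t() @ z_private).pow(2).sum()

# Toy check: uncorrelated feature batches give zero loss, identical ones do not.
a = torch.tensor([[ 1.0, 0.0],
                  [-1.0, 0.0]])   # shared features for two samples
b = torch.tensor([[ 1.0, 0.0],
                  [ 1.0, 0.0]])   # private features for the same samples
print(orthogonality_loss(a, b).item())  # 0.0
print(orthogonality_loss(a, a).item())  # > 0
```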

The paper Adversarial Continual Learning is on arXiv.


Author: Xuehan Wang | Editor: Michael Sarazen
