Albert Einstein once said that “wisdom is not a product of schooling, but the lifelong attempt to acquire it.” Centuries of human progress have been built on our brains’ ability to continually acquire, fine-tune and transfer knowledge and skills. Such continual learning, however, remains a long-standing challenge in machine learning (ML), where the ongoing acquisition of incrementally available information from non-stationary data often leads to catastrophic forgetting.
Gradient-based deep architectures have spurred the development of continual learning in recent years, but these algorithms are often designed and implemented from scratch under differing assumptions, settings and benchmarks, making them difficult to compare, port or reproduce.
Now, a research and development team from ContinualAI with researchers from KU Leuven, ByteDance AI Lab, University of California, New York University and other institutions has proposed Avalanche, an end-to-end library for continual learning based on PyTorch.
Avalanche is designed to ease the implementation, assessment, and replication of continual learning algorithms across different settings while promoting the reproducibility of results from previous studies. The team believes the library can help researchers and practitioners in a number of ways: 1) Write less code, prototype faster and reduce errors; 2) Improve reproducibility; 3) Improve modularity and reusability; 4) Increase code efficiency, scalability and portability; 5) Augment impact and usability of research products.
The researchers summarize their main contributions as follows:
- Propose a general continual learning framework that provides the conceptual foundation for Avalanche.
- Discuss the general design of the library based on five main modules: Benchmarks, Training, Evaluation, Models, and Logging.
- Release the open-source, collaboratively maintained project on GitHub, the result of a collaboration involving more than 15 organizations across Europe, the United States and China.
Avalanche’s design is based on five principles: 1) Comprehensiveness and Consistency; 2) Ease-of-Use; 3) Reproducibility and Portability; 4) Modularity and Independence; 5) Efficiency and Scalability.
Comprehensiveness means giving the continual learning community an exhaustive, unifying library with end-to-end support. A comprehensive codebase offers researchers and practitioners a single, clear access point, enables coherent and easy interaction across modules and sub-modules, and promotes the consolidation of a large community able to support the library.
To improve Avalanche’s ease of use, the researchers provide an intuitive application programming interface (API), an official website, and rich documentation with comprehensive explanations and executable notebook examples.
Avalanche enables researchers to easily integrate their own work into a shared codebase, compare their solutions against previous results, and speed up development, thus securing both reproducibility and portability. Regarding modularity and independence, Avalanche guarantees the stand-alone usability of individual module functionalities and makes it easier to learn and adopt any particular tool.
Last but not least, Avalanche offers end users a seamless and transparent experience across various hardware platforms and use cases, keeping continual learning models efficient and scalable.
The library is organized into five main modules:

- Benchmarks: maintains a uniform API for data handling and contains all the major continual learning benchmarks.
- Training: provides simple and efficient ways to implement new continual learning strategies, along with a set of pre-implemented baselines and state-of-the-art algorithms.
- Evaluation: supplies the utilities and metrics needed to evaluate continual learning.
- Models: offers a set of simple machine learning architectures, including versions of feedforward and convolutional neural networks and a pretrained MobileNet (v1).
- Logging: includes advanced logging and plotting features, with native stdout (standard output), text-file and TensorBoard support.
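To make this modular design concrete, here is a minimal sketch in plain Python of how benchmark, training, evaluation and logging pieces typically fit together in a continual learning workflow. All class and function names below are illustrative assumptions for exposition only, not Avalanche's actual API; consult the library's documentation for the real interfaces.

```python
# Hypothetical sketch of a continual-learning workflow, loosely mirroring
# the Benchmarks / Training / Evaluation / Logging split described above.
# Names here are illustrative assumptions, NOT Avalanche's real API.
from dataclasses import dataclass


@dataclass
class Experience:
    """One incremental chunk of a benchmark (e.g. a new batch of classes)."""
    task_id: int
    data: list  # (x, y) pairs


def make_benchmark(n_experiences=3):
    """Benchmarks-module role: split a data stream into experiences."""
    return [
        Experience(task_id=i, data=[(x, x % 2) for x in range(i * 10, i * 10 + 10)])
        for i in range(n_experiences)
    ]


class NaiveStrategy:
    """Training-module role: fine-tune on each experience in sequence
    (the simple baseline that suffers from catastrophic forgetting)."""

    def __init__(self):
        self.seen_tasks = []

    def train(self, experience):
        self.seen_tasks.append(experience.task_id)
        # ...gradient updates on experience.data would go here...


def evaluate(strategy, benchmark):
    """Evaluation-module role: report metrics over the whole stream."""
    return {"tasks_trained": len(strategy.seen_tasks)}


if __name__ == "__main__":
    benchmark = make_benchmark()
    strategy = NaiveStrategy()
    for exp in benchmark:
        strategy.train(exp)
        # Logging-module role: plain stdout here; TensorBoard in the library.
        print(f"finished experience {exp.task_id}")
    print(evaluate(strategy, benchmark))  # {'tasks_trained': 3}
```

The point of the separation is that each role can be swapped independently: a different strategy class can be evaluated on the same benchmark stream, and the same metrics can be logged to different backends.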
The current Avalanche Alpha version focuses on continual supervised learning for computer vision tasks. The team hopes the library can help advance research on continual learning and related grand challenges in AI. They have also built a website and are hosting a meetup to discuss the topic.
Founded in 2018 by University of Pisa Assistant Professor Vincenzo Lomonaco, ContinualAI is a non-profit research organization and open community on continual learning for AI. The ContinualAI Meetup is hosted on YouTube, and further information is available on the Avalanche website.
The paper Avalanche: an End-to-End Library for Continual Learning is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.