Meta-learning, or learning to learn, enables machines to learn new skills or adapt to new environments rapidly with only a few training examples. Meta-learning is expected to play an important role in the development of next-generation AI models, and as such there is increasing interest in improving the performance of meta-learning algorithms.
The conventional wisdom in the machine learning research community is that meta-learning performance will improve when models are trained on more diverse tasks. A research team from Mila – Québec Artificial Intelligence Institute, Université de Montréal, CIFAR and IVADO Labs challenges this assumption in the new paper The Effect of Diversity in Meta-Learning, arguing that repeating the same tasks throughout training can achieve performance similar to that of models trained with uniform task sampling.
The team summarizes their main contributions as:
- We show that, against conventional wisdom, task diversity does not significantly boost performance in meta-learning. Instead, limiting task diversity and repeating the same tasks throughout training allows the model to achieve performance similar to that of models trained on a Uniform Sampler, without any adverse effects.
- We also show that increasing task diversity using sophisticated samplers such as the DPP or Online Hard Task Mining (OHTM) Samplers does not significantly boost performance. Instead, the dynamic-DPP Sampler harms the model due to the increased task diversity.
- We empirically show that repeating tasks throughout training can match a model trained on the Uniform Sampler, achieving similar performance with only a fraction of the data. This key finding questions the need to enlarge the support set pool to improve performance, since the excess data does not appear to provide any additional boost.
- This brings into question the efficiency of existing models and the advantage they gain from access to more data under the standard sampling regime, the Uniform Sampler. If similar performance can be achieved with less data, existing models are not taking advantage of the excess data they are provided with.
The study’s findings were obtained mainly through empirical study, with the researchers examining different task distributions across six models (MAML, Reptile, Protonet, Matching Networks, MetaOptNet, and CNAPs), using the Omniglot, miniImageNet, tieredImageNet and Meta-Dataset benchmarks to evaluate the effect of task diversity on meta-learning.
The team experimented with eight distinct task samplers, each offering a different level of task diversity:
- A Uniform Sampler that creates a new task by uniformly sampling classes.
- A No Diversity Task Sampler that uniformly samples one task at the beginning and propagates the same task across all batches and meta-batches.
- A No Diversity Batch (NDB) Sampler that uniformly samples one set of tasks for the first batch and propagates the same tasks across all other batches.
- A No Diversity Tasks per Batch (NDTB) Sampler that uniformly samples one set of tasks for a given batch and propagates the same tasks across all meta-batches.
- A Single Batch Uniform Sampler that sets the meta-batch size to one.
- An Online Hard Task Mining (OHTM) Sampler that applies the OHEM sampler (Shrivastava et al., 2016) for half the meta-batch and a uniform sampler for the remaining half.
- A Static Determinantal Point Processes (DPP) Sampler that samples the most diverse tasks based on class embeddings.
- A Dynamic DPP Sampler that samples tasks with a uniform sampler until the model becomes sufficiently trained, after which it switches to DPP-based sampling.
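To make the contrast between the low- and high-diversity regimes concrete, here is a minimal, hypothetical sketch (not the authors' implementation; the function and class names are illustrative) of a Uniform Sampler, which draws fresh N-way tasks every meta-batch, versus an NDB-style sampler, which draws one meta-batch of tasks up front and replays it:

```python
import random

def uniform_task_sampler(class_pool, n_way, meta_batch_size):
    """Uniform Sampler: draw a fresh N-way task (a set of classes)
    for every slot in the meta-batch."""
    return [random.sample(class_pool, n_way) for _ in range(meta_batch_size)]

class NoDiversityBatchSampler:
    """NDB-style sampler: uniformly sample one meta-batch of tasks once,
    then return the same tasks for every subsequent batch."""
    def __init__(self, class_pool, n_way, meta_batch_size):
        self.fixed_tasks = uniform_task_sampler(class_pool, n_way, meta_batch_size)

    def sample(self):
        # Every call replays the identical task set, so the model only
        # ever sees a small fixed fraction of the possible tasks.
        return self.fixed_tasks
```

Under the paper's findings, training against repeated calls to `NoDiversityBatchSampler.sample()` reuses the same small task set every batch, yet reportedly reaches performance comparable to fresh uniform sampling.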
The researchers identify the No Diversity Batch, No Diversity Tasks per Batch, Uniform, OHTM and s-DPP Samplers as high-performing, and the No Diversity Task, Single Batch Uniform and d-DPP Samplers as low-performing, with these performance trends holding across all datasets and models.
Based on their extensive experiments, the team concludes that limiting task diversity and repeating the same tasks throughout training attains meta-learning performance similar to that of the Uniform Sampler. They observe that when using low-diversity task samplers such as the NDTB and NDB, a model trained on even a tiny fraction of the data can perform comparably to a model trained using the Uniform Sampler, and that even sophisticated samplers such as the OHTM or DPP do not offer any significant boost in model performance. They further note that increasing task diversity using the d-DPP Sampler can actually hamper meta-learning model performance.
Overall, the study questions the need to increase the support set pool to improve model performance and demonstrates that task diversity does not lead to any significant performance boost in meta-learning. The researchers hope their paper can help establish groundwork and rules for task sampling approaches in meta-learning and encourage further research in this area.
The paper The Effect of Diversity in Meta-Learning is on arXiv.
Author: Hecate He | Editor: Michael Sarazen