In a new paper, Yann LeCun and a Facebook AI and New York University research team introduce Barlow Twins, a novel self-supervised learning approach for computer vision.
Recent advances in self-supervised learning (SSL) for visual data show that it is possible to train highly competitive image representations without manual labels, and that this approach can sometimes even outperform supervised learning. Current SSL methods aim to learn representations that are invariant under different distortions, also referred to as data augmentations, and typically do this by maximizing the similarity of representations obtained from different distorted versions of a sample.
The LeCun team explains that trivial constant representations are a recurring issue with such approaches, which typically employ different mechanisms and careful implementation details to avoid collapsed solutions. The proposed Barlow Twins objective function addresses this: it measures the cross-correlation matrix between the output features of two identical networks fed with distorted versions of a sample, and drives that matrix to be as close as possible to the identity matrix. This makes the representations of the distorted versions similar while minimizing the redundancy between the components of these vectors.
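
Concretely, as formulated in the paper, if $\mathcal{C}$ denotes the cross-correlation matrix computed along the batch dimension between the (mean- and variance-normalized) outputs of the two networks, the objective is

$$\mathcal{L}_{BT} \triangleq \sum_i \left(1 - \mathcal{C}_{ii}\right)^2 + \lambda \sum_i \sum_{j \neq i} \mathcal{C}_{ij}^2,$$

where the first (invariance) term encourages the embedding to be invariant to the applied distortions, the second (redundancy-reduction) term decorrelates the components of the embedding vector, and the coefficient $\lambda$ trades off the two terms.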

Inspired by British neuroscientist Horace Barlow’s 1961 article Possible Principles Underlying the Transformation of Sensory Messages, the Barlow Twins method applies redundancy reduction, a principle that can explain the organization of the visual system, to self-supervised learning.

The Barlow Twins objective function is similar in form to other SSL objective functions but includes key conceptual differences that lead to practical advantages over InfoNCE-based contrastive loss methods: Barlow Twins does not require a large number of negative samples and can therefore operate on small batches, and it can take advantage of very high-dimensional output representations.
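
To make this concrete, here is a minimal PyTorch-style sketch of such a redundancy-reduction loss, modeled on the pseudocode in the paper; the function name, the default value of lambd, and the sizes in the usage example are illustrative rather than the authors’ exact settings.

```python
import torch

def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    """Drive the D x D cross-correlation matrix of two embeddings
    toward the identity matrix.

    z_a, z_b: (N, D) embeddings of two distorted views of the same batch.
    lambd: weight of the redundancy-reduction (off-diagonal) term.
    """
    n = z_a.shape[0]
    # Standardize each embedding dimension over the batch
    z_a = (z_a - z_a.mean(dim=0)) / z_a.std(dim=0)
    z_b = (z_b - z_b.mean(dim=0)) / z_b.std(dim=0)
    # Empirical cross-correlation matrix (D x D)
    c = (z_a.T @ z_b) / n
    # Invariance term: diagonal entries should equal 1
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: off-diagonal entries should equal 0
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()
    return on_diag + lambd * off_diag

# The loss is well defined even for a small batch (N = 32) paired with
# a very high-dimensional embedding (D = 8192).
loss = barlow_twins_loss(torch.randn(32, 8192), torch.randn(32, 8192))
```

Because all statistics are computed per feature dimension across the batch rather than by contrasting sample pairs, the loss needs no negative samples, which is why it remains usable at small batch sizes and large embedding dimensions.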

The researchers evaluated the Barlow Twins representations via transfer learning to different datasets and computer vision tasks, and also tested the method on image classification and object detection, where the network was pretrained using self-supervised learning on the ImageNet ILSVRC-2012 dataset.
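
For readers unfamiliar with this evaluation setup, a common way to assess such pretrained representations is a linear-evaluation protocol: the self-supervised backbone is frozen and only a linear classifier is trained on its features. The sketch below assumes a ResNet-50 backbone, as is standard in this line of work, and is illustrative rather than the authors’ exact training script.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical frozen backbone; in practice its weights would be
# loaded from a self-supervised pretraining run.
backbone = resnet50()
backbone.fc = nn.Identity()         # expose the 2048-d pooled features
backbone.requires_grad_(False)      # freeze all backbone weights
backbone.eval()

classifier = nn.Linear(2048, 1000)  # linear probe for ImageNet's 1000 classes
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():           # features come from the frozen encoder
        feats = backbone(images)
    logits = classifier(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```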

The results show that Barlow Twins outperforms previous state-of-the-art methods for self-supervised learning while being conceptually simpler and avoiding trivial constant (i.e. collapsed) representations. The researchers believe the proposed method is just one possible instantiation of the information bottleneck principle applied to SSL, and that further algorithm refinements could lead to more effective solutions.
The paper Barlow Twins: Self-Supervised Learning via Redundancy Reduction is on arXiv.
Author: Hecate He | Editor: Michael Sarazen

We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.