Tag: self-supervised learning

AI, Computer Vision & Graphics, Machine Learning & Data Science, Research

Maximizing FLOPS Utilization: DeepMind & NYU Propose Efficiency Evaluations for Visual Pretraining Methods

In the new paper Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods, DeepMind and NYU Center for Neural Systems researchers introduce computational efficiency evaluation approaches designed to aid in the selection of optimal methods, datasets and models for visual pretraining under a fixed FLOP budget.
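
The premise is to compare candidates at matched compute rather than matched epochs or steps. As a minimal sketch of the bookkeeping involved (the budget and per-step costs below are hypothetical placeholders, not figures from the paper):

    # Compare pretraining methods at a matched FLOP budget rather than
    # matched epochs. All numbers are hypothetical placeholders.
    BUDGET_FLOPS = 1e21  # assumed total pretraining compute budget

    flops_per_step = {   # hypothetical per-step cost of each method
        "simclr_resnet50": 3.2e15,
        "mae_vit_base": 1.1e15,
        "byol_resnet50": 3.5e15,
    }

    for method, cost in flops_per_step.items():
        steps = int(BUDGET_FLOPS // cost)
        print(f"{method}: {steps:,} training steps within budget")

Under a fixed budget, a cheaper per-step method simply gets more optimization steps, which is what makes comparisons across methods of very different cost fair.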

AI, Machine Learning & Data Science, Research

Wav2Vec 2.0 Learns Brain-Like Representations From Just 600 Hours of Unlabeled Speech Data in New Study

In the new paper Toward a Realistic Model of Speech Processing in the Brain with Self-supervised Learning, researchers show that self-supervised architectures such as Wav2Vec 2.0 can learn brain-like representations from as little as 600 hours of unlabeled speech, and can also learn sound-generic as well as speech- and language-specific representations similar to those of the prefrontal and temporal cortices.
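
Studies of this kind typically fit a linear "encoding model" from a network's activations to recorded brain responses. Below is a minimal sketch under that reading, using the public facebook/wav2vec2-base checkpoint from the Hugging Face transformers library; the brain data here is random placeholder noise, and the layer choice and ridge penalty are arbitrary assumptions:

    import numpy as np
    import torch
    from sklearn.linear_model import Ridge
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
    model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

    audio = np.random.randn(16000 * 5)  # 5 s of stand-in "speech" at 16 kHz
    inputs = extractor(audio, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = model(inputs.input_values, output_hidden_states=True).hidden_states

    acts = hidden[6][0].numpy()                 # (time, dim); layer 6 is an arbitrary pick
    brain = np.random.randn(acts.shape[0], 50)  # placeholder "voxel" responses

    encoder = Ridge(alpha=1.0).fit(acts, brain)
    print("encoding-model fit:", encoder.score(acts, brain))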

AI, Machine Learning & Data Science, Research

Meta AI Extends MAEs to Video for Self-Supervised Representation Learning With Minimal Domain Knowledge

In the new paper Masked Autoencoders As Spatiotemporal Learners, a Meta AI research team extends masked autoencoders (MAE) to spatiotemporal representation learning for video. The approach introduces negligible inductive biases on space-time, achieves strong empirical results with plain vision transformers (ViTs), and outperforms supervised pretraining by large margins.
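
The core mechanism is simple: split a clip into spacetime patches, randomly mask a very large fraction of them (the paper finds a 90% masking ratio works best for video), and encode only the visible patches. A minimal sketch of the masking step, with illustrative patch sizes and tensor shapes:

    import torch

    def mask_spacetime_patches(video, patch=(2, 16, 16), mask_ratio=0.9):
        """video: (C, T, H, W). Returns visible patches and the boolean mask."""
        C, T, H, W = video.shape
        pt, ph, pw = patch
        # Flatten the clip into N spacetime patches of C * pt * ph * pw values each.
        patches = (video
                   .reshape(C, T // pt, pt, H // ph, ph, W // pw, pw)
                   .permute(1, 3, 5, 0, 2, 4, 6)
                   .reshape(-1, C * pt * ph * pw))
        n = patches.shape[0]
        keep = int(n * (1 - mask_ratio))
        perm = torch.randperm(n)
        mask = torch.ones(n, dtype=torch.bool)   # True = masked
        mask[perm[:keep]] = False
        return patches[perm[:keep]], mask

    video = torch.randn(3, 16, 224, 224)          # a 16-frame RGB clip
    visible, mask = mask_spacetime_patches(video)
    print(visible.shape, mask.float().mean())     # only ~10% of patches survive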

AI, Computer Vision & Graphics, Machine Learning & Data Science, Popular, Research

Pushing the Limits of Self-Supervised ResNets: DeepMind’s ReLICv2 Beats Strong Supervised Baselines on ImageNet

A DeepMind research team proposes ReLICv2, which demonstrates for the first time that representations learned without labels can consistently outperform a strong, supervised baseline on ImageNet and even achieve comparable results to state-of-the-art self-supervised vision transformers (ViTs).
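
ReLICv2 builds on the ReLIC objective, which pairs a standard contrastive term with an explicit invariance regularizer that pushes the similarity distributions of two augmented views toward each other. A schematic PyTorch reconstruction of that idea (not DeepMind's code; the temperature and weighting are assumptions):

    import torch
    import torch.nn.functional as F

    def relic_style_loss(z1, z2, temperature=0.1, alpha=1.0):
        """z1, z2: (batch, dim) embeddings of two augmented views."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits12 = z1 @ z2.T / temperature   # view-1 queries against view-2 keys
        logits21 = z2 @ z1.T / temperature
        targets = torch.arange(z1.shape[0])
        contrastive = (F.cross_entropy(logits12, targets) +
                       F.cross_entropy(logits21, targets))
        # Invariance term: the two views' similarity distributions should agree.
        invariance = F.kl_div(F.log_softmax(logits12, dim=1),
                              F.softmax(logits21, dim=1), reduction="batchmean")
        return contrastive + alpha * invariance

    loss = relic_style_loss(torch.randn(8, 128), torch.randn(8, 128))
    print(loss.item())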

AI, Computer Vision & Graphics, Machine Learning & Data Science, Research

Facebook AI & JHU’s MaskFeat Method Surpasses Kaiming He’s MAE, Sets New SOTA in Video Action Recognition

In the new paper Masked Feature Prediction for Self-Supervised Visual Pre-Training, a Facebook AI Research and Johns Hopkins University team presents a novel Masked Feature Prediction (MaskFeat) approach for the self-supervised pretraining of video models that achieves SOTA results on video benchmarks.
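
MaskFeat's key design choice is the regression target: rather than reconstructing raw pixels as MAE does, the model predicts Histograms of Oriented Gradients (HOG) descriptors of the masked regions. A minimal sketch of computing per-patch HOG targets with scikit-image (the patch geometry and HOG parameters are illustrative, not the paper's exact settings):

    import numpy as np
    from skimage.feature import hog

    def hog_targets(image, patch=16):
        """image: (H, W) grayscale array. Returns one HOG vector per patch."""
        H, W = image.shape
        targets = []
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                targets.append(hog(image[i:i + patch, j:j + patch],
                                   orientations=9, pixels_per_cell=(8, 8),
                                   cells_per_block=(1, 1), feature_vector=True))
        return np.stack(targets)

    image = np.random.rand(224, 224)
    print(hog_targets(image).shape)  # (196, 36): 9 bins x 4 cells per 16x16 patch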

AI, Machine Learning & Data Science, Research

UC Berkeley’s Sergey Levine Says Combining Self-Supervised and Offline RL Could Enable Algorithms That Understand the World Through Actions

In the new paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences, argues that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning, enabling machine learning systems that leverage large datasets to understand the real world.

AI, Computer Vision & Graphics, Machine Learning & Data Science, Research

Apple Study Reveals the Learned Visual Representation Similarities and Dissimilarities Between Self-Supervised and Supervised Methods

An Apple research team performs a comparative analysis of a contrastive self-supervised learning (SSL) algorithm (SimCLR) and a supervised learning (SL) approach on simple image data with a common architecture, shedding light on the similarities and dissimilarities in their learned visual representations.
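
Comparisons like this are commonly quantified with centered kernel alignment (CKA), which scores how similar two networks' activations are on the same inputs. A minimal sketch of linear CKA, with random features standing in for the SimCLR and supervised activations:

    import numpy as np

    def linear_cka(X, Y):
        """X: (n, d1), Y: (n, d2) activations for the same n inputs."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        num = np.linalg.norm(Y.T @ X, "fro") ** 2
        den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
        return num / den

    ssl_acts = np.random.randn(512, 2048)   # placeholder SimCLR features
    sl_acts = np.random.randn(512, 2048)    # placeholder supervised features
    print("CKA(SSL, SL) =", linear_cka(ssl_acts, sl_acts))
    print("CKA(SSL, SSL) =", linear_cka(ssl_acts, ssl_acts))  # identical reps give 1.0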