The idiom “actions speak louder than words” first appeared in print almost 300 years ago. A new study echoes this view, arguing that combining self-supervised and offline reinforcement learning (RL) could lead to a new class of algorithms that understand the world through actions and enable scalable representation learning.
Machine learning (ML) systems have achieved outstanding performance in domains ranging from computer vision to speech recognition and natural language processing, yet they still struggle to match the flexibility and generality of human reasoning. This has led ML researchers to search for the “missing ingredient” that might boost these systems’ ability to understand, reason, and generalize.
In the paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences, suggests that a general, principled, and powerful framework for utilizing unlabelled data can be derived from RL, enabling ML systems that leverage large datasets to better understand the real world.
Several hypotheses have been advanced to address this “missing ingredient” question in ML systems, such as causal reasoning, inductive bias, and better algorithms for self-supervised or unsupervised learning. Levine says that while the problem is challenging and involves a great deal of guesswork, recent progress in AI can provide some guiding principles: 1) the “unreasonable” effectiveness of large, generic models supplied with large amounts of training data; and 2) the fact that manual labelling and supervision do not scale nearly as well as unsupervised or self-supervised learning.
Levine believes the next bottleneck facing ML researchers is how to train large models without manual labelling or manually designed self-supervised objectives, such that the resulting models distill a deep and meaningful understanding of the world and can perform downstream tasks with robust generalization and even a degree of common sense.
To achieve this goal, autonomous agents will require an understanding of their environments that is causal and generalizable. Such agents would advance beyond the current RL paradigm, in which 1) RL algorithms require a task goal (i.e., a reward function) to be specified by experts; and 2) RL algorithms are not inherently data-driven, but rather learn from online experience, an approach that limits both their generalization ability and their ability to learn how the real world works.
Levine envisions algorithms that, rather than aiming at a single user-specified task, seek to accomplish whatever outcomes they infer are possible in the real world. He proposes developing offline RL algorithms that can effectively utilize previously collected datasets, enabling a system to use its training time to learn and perform user-specified tasks while also treating its collected experience as offline training data for learning to achieve a wider range of outcomes.
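To make this outcome-driven, self-supervised objective concrete, the sketch below illustrates hindsight goal relabeling, one common way to turn unlabelled experience into goal-reaching supervision without an expert-specified reward function. The trajectory format, function name, and sparse reward are illustrative assumptions for this sketch, not the paper’s actual implementation.

```python
import random

def relabel_with_hindsight(trajectory):
    """Turn an unlabelled trajectory into self-supervised goal-reaching data.

    `trajectory` is a list of (state, action, next_state) tuples; this format
    is an assumption made for the sketch. Every state the agent actually
    reached becomes a goal it is retroactively assumed to have pursued, so
    no hand-designed reward function is required.
    """
    relabeled = []
    for t, (state, action, next_state) in enumerate(trajectory):
        # Pick a state the agent reached later in the same trajectory
        # and pretend it was the intended goal all along.
        goal = random.choice([s for (_, _, s) in trajectory[t:]])
        # Sparse self-supervised reward: 1 if this step reached the goal.
        reward = 1.0 if next_state == goal else 0.0
        relabeled.append((state, action, goal, reward, next_state))
    return relabeled
```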
Levine believes offline RL has the potential to significantly increase the applicability of self-supervised RL methods, and that it can be combined with goal-conditioned policies to learn entirely from previously collected data.
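A rough sketch of how such relabeled data could be consumed entirely offline is shown below: one gradient step of goal-conditioned Q-learning on a fixed batch, with no environment interaction. The conservative penalty is in the spirit of CQL (which Levine co-authored); the discrete action space, network shapes, and all names are assumptions made for illustration, not the paper’s prescribed method.

```python
import torch
import torch.nn as nn

def offline_q_update(q_net, target_net, optimizer, batch, gamma=0.99, alpha=1.0):
    """One gradient step of conservative, goal-conditioned Q-learning on a
    fixed, previously collected batch (no online data collection)."""
    state, action, goal, reward, next_state = batch  # tensors from the static dataset

    # Q-values are conditioned on the goal, so one network covers many outcomes.
    q_all = q_net(torch.cat([state, goal], dim=-1))            # (B, num_actions)
    q_taken = q_all.gather(1, action.unsqueeze(1)).squeeze(1)  # (B,)

    with torch.no_grad():
        next_q = target_net(torch.cat([next_state, goal], dim=-1)).max(dim=1).values
        td_target = reward + gamma * next_q

    bellman_loss = nn.functional.mse_loss(q_taken, td_target)
    # Conservative term: push down values of all actions while pushing up
    # actions actually present in the dataset, curbing over-estimation of
    # actions the static data never covers.
    conservative = (torch.logsumexp(q_all, dim=1) - q_taken).mean()

    loss = bellman_loss + alpha * conservative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```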
Overall, the paper explores how self-supervised RL combined with offline RL could realize scalable representation learning. Self-supervised training can enable models to understand how the world works, and fulfilling self-supervised RL objectives can allow models to gain a causal understanding of the environment. Such techniques must be applicable at scale to real-world datasets, a challenge met by offline RL, which enables the use of large, diverse previously collected datasets.
The paper Understanding the World Through Action is on arXiv.
Author: Hecate He | Editor: Michael Sarazen