Sorting a bunch of differently coloured toy trucks and action figures seems like child’s play, right? Unfortunately, it remains a challenging task in the world of machine learning. So why not have humans simply show the machines how to do it?
This is the inspiration behind a new research project led by Stanford Artificial Intelligence Lab Director Fei-Fei Li and her husband, Stanford Associate Professor Silvio Savarese. The project introduces two new global platforms — RoboTurk and Surreal — designed to provide high-quality task demonstration data to help researchers working in robotic manipulation.
RoboTurk is a crowdsourcing platform for collecting human demonstrations of tasks such as “picking” and “assembly,” while Surreal is an open-source reinforcement learning framework that accelerates the machines’ learning process.
The “humans teaching robots” concept itself is not new. Recent advances in imitation learning have demonstrated its potential for robotic manipulation tasks. Last year OpenAI created a robotics system that learns behaviors and actions from a single human demonstration performed in a virtual reality environment and then replicates them in the real world. Berkeley Artificial Intelligence Research (BAIR) meanwhile recently presented One-Shot Imitation from Watching Videos, a training process that enables a robot to learn a skill from a single video of a human performing it and to integrate what it has learned with its prior understanding of the target objects.
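To give a rough sense of how learning from human demonstrations works in its simplest form, the sketch below fits a policy to hypothetical (observation, action) pairs via behavioral cloning. It is an illustration of the general idea only, assuming made-up data shapes, and does not use the actual RoboTurk data format or the Surreal, OpenAI, or BAIR systems described above.

```python
# Minimal behavioral-cloning sketch (illustrative only; not the RoboTurk or
# Surreal API). A policy is fit to (observation, action) pairs drawn from
# human demonstrations, then queried for actions on new observations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data: 500 timesteps, each mapping a 10-D
# observation to a 4-D continuous action (e.g. gripper pose deltas).
obs = rng.normal(size=(500, 10))
true_w = rng.normal(size=(10, 4))
actions = obs @ true_w + 0.01 * rng.normal(size=(500, 4))

# Fit a linear policy by least squares -- the simplest possible "imitator".
w, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(observation):
    """Predict an action for a new observation using the cloned policy."""
    return observation @ w

new_obs = rng.normal(size=(10,))
print("predicted action:", policy(new_obs))
```

In practice, systems like the ones above replace the linear map with a deep network and combine such supervised imitation with reinforcement learning, but the underlying recipe of learning a mapping from observations to demonstrated actions is the same.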