
Robot Lovers Rejoice! Fei-Fei Li Stanford Team Crowd-Sources World’s Largest Robot Manipulation Dataset

The crowdsourcing produced 111.25 hours of video from 54 non-expert demonstrators to build “one of the largest, richest, and most diverse robot manipulation datasets ever collected using human creativity and dexterity.”

Two years ago, Stanford Artificial Intelligence Lab Director Fei-Fei Li and her team launched the global platforms RoboTurk and Surreal to crowd-source high-quality task demonstration data that could help researchers working in robotic manipulation.

Now, the wait is over. The Stanford Vision and Learning Lab announced this week that the RoboTurk Real Robot Dataset is available as a free download. The crowdsourcing produced 111.25 hours of video from 54 non-expert demonstrators to build “one of the largest, richest, and most diverse robot manipulation datasets ever collected using human creativity and dexterity.”


Participants used smartphones and browsers to access the original RoboTurk crowdsourcing platform, where they could remotely control robot simulations in real time. The Stanford researchers later extended RoboTurk to collect data on actual hardware, incorporating three Sawyer robot arms. The process is simple: users watch a live video stream of the robot workspace as they control the arm to complete tasks.

A coordination server pairs a user with a robot arm to start a new teleoperation session. Because each arm allows only one active session at a time, users queue until the next arm becomes available. The RoboTurk Real Robot Dataset was built from a variety of sensors and data streams: front-facing cameras provide the same view available to users, top-down depth cameras capture depth information, and sensors on the arm record joint and end-effector readings.
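To make the session flow concrete, here is a minimal Python sketch of how such a coordination server might pair queued users with free arms. All class and method names here are illustrative assumptions, not RoboTurk’s actual implementation; only the one-session-per-arm constraint comes from the description above.

```python
import queue
import threading

class CoordinationServer:
    """Hypothetical sketch: pairs waiting users with free robot arms.

    Each arm allows only one active teleoperation session at a time,
    mirroring the constraint described in the article.
    """

    def __init__(self, arm_ids):
        self.free_arms = queue.Queue()      # arms with no active session
        for arm_id in arm_ids:
            self.free_arms.put(arm_id)
        self.waiting_users = queue.Queue()  # users queued for the next free arm
        self.lock = threading.Lock()

    def request_session(self, user_id):
        """Start a session if an arm is free; otherwise queue the user."""
        with self.lock:
            try:
                arm_id = self.free_arms.get_nowait()
            except queue.Empty:
                self.waiting_users.put(user_id)
                return None  # caller waits until an arm frees up
            return self._start_session(user_id, arm_id)

    def end_session(self, arm_id):
        """Hand the freed arm to the next queued user, or mark it free."""
        with self.lock:
            try:
                next_user = self.waiting_users.get_nowait()
            except queue.Empty:
                self.free_arms.put(arm_id)
                return None
            return self._start_session(next_user, arm_id)

    def _start_session(self, user_id, arm_id):
        # In the real system this would open the live video stream and
        # control channel; here we just return the pairing.
        return (user_id, arm_id)
```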


Users were asked to manipulate the robot arms to solve three tasks. In Object Search, a bin contains objects of different classes, and users must find three objects of the same class and sort them into the corresponding bin on the side. In Tower Creation, users stack everyday kitchen items of different shapes, such as bowls and cups, into as tall a tower as possible. In Laundry Layout, users unfold and flatten a cloth or other fabric object. The researchers chose these tasks because they require both high-level reasoning and low-level dexterity to complete.

The researchers note that the crowdsourcing platform can be improved further, and they propose using the dataset for policy learning and for applications such as multimodal density estimation, video prediction, reward function learning, and hierarchical task planning.
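For a sense of how demonstration data like this feeds into policy learning, below is a minimal behavioral-cloning sketch in PyTorch. It is an illustration under stated assumptions, not the paper’s method: the observation and action tensors are random placeholders standing in for the joint and end-effector readings and demonstrator commands one would actually load from the dataset files.

```python
import torch
import torch.nn as nn

# Behavioral cloning: supervised regression from observed robot states
# to the actions human demonstrators took.

class Policy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

# Placeholder data (hypothetical shapes): in practice these would be
# sensor readings and demonstrated actions parsed from the dataset.
obs = torch.randn(1024, 32)   # hypothetical observation vectors
act = torch.randn(1024, 7)    # hypothetical 7-DoF arm actions

policy = Policy(obs_dim=32, act_dim=7)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(policy(obs), act)  # imitate the demonstrated actions
    loss.backward()
    optimizer.step()
```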

The Stanford research has inspired some in the AI community to describe RoboTurk as an ImageNet for robotics. Two years in the making, the dataset is at the very least a good first step and a great example of what a community can produce. As RoboTurk continues to grow in size and scope, it will undoubtedly benefit machine learning and robotics researchers and enable further innovations.

The paper Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity is on arXiv, and the RoboTurk Real Robot Dataset can be downloaded for free from the project website.


Journalist: Fangyu Cai | Editor: Michael Sarazen
