
Embodying Robots With Intelligence

Engineers have long struggled to teach robots how to perform actions that appear simple by human standards. Former OpenAI research scientist Peter Chen believes the solution is transforming such robots into teachable apprentices.

Meshing a pair of cogwheels looks like child’s play: they seem to fit together naturally. Yet monitoring the gears’ angular differences and optimizing their alignment and rotation is an engineering problem that can stump an industrial robot.

This September, Chen and his mentor Pieter Abbeel — one of the top minds in robotic technology and artificial intelligence — launched Embodied Intelligence, a company dedicated to developing robots that can easily learn new skills without sophisticated programming. Rocky Duan from OpenAI and Tianhao Zhang from Microsoft are also co-founders.

(clockwise from top left) Embodied Intelligence co-founders Pieter Abbeel, Peter Chen, Tianhao Zhang, and Rocky Duan.

“More and more young people are reluctant to take on boring and repetitive tasks in factories or warehouses, while current industrial robots cannot adapt to flexible industrial manufacturing that allows the system to react in case of changes,” says Chen.

Chen’s approach involves three stages: teaching robots, reducing the cost of teaching robots, and enabling robots to learn by themselves.

The first stage was completed two years ago at the Berkeley Artificial Intelligence Research (BAIR) laboratory, the top robotics lab where Chen worked as a PhD student. The lab successfully enabled robots to use their vision sensors to recreate actions they had observed. Deep neural networks proved to be a good computational architecture for this.

However, the research stalled at the laboratory stage. Training a robot this way requires pre-programming and debugging by engineers with deep learning expertise. A university lab can ask a few doctoral students to work day and night, but the method is neither time- nor cost-effective, and certainly not applicable in industry.

This year, Chen’s team implemented imitation learning — a method for robots to learn specific movements from human demonstrations.

In Chen’s Emeryville, California office, a colleague dons a virtual reality headset and performs a series of hand movements, which the robot tracks in real time. The data is fed into a deep neural network, and the robot replicates the movements until it can perform them without human assistance. The entire process takes just 30 minutes.

“Even multiplying this 30 minutes by 100 is still far less than the usual cost of training robots,” says Chen.

The method can be applied to different movements without changing the code used for VR capture, demonstration collection, training, or the neural network; only the demonstrations differ.
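The pipeline described above is classic imitation learning, often called behavioral cloning: demonstrations become supervised training data for a policy that maps observed states to actions. The article does not disclose Embodied Intelligence’s actual architecture, so the following is only a minimal sketch with a linear policy fit by ridge regression; the function names and toy data are invented for illustration:

```python
import numpy as np

def behavioral_cloning(states, actions, reg=1e-3):
    """Fit a linear policy a = s @ W by ridge regression on demonstrations.

    states:  (N, d_s) array of states observed during human demos
    actions: (N, d_a) array of the actions the demonstrator took
    Returns W with shape (d_s, d_a).
    """
    S, A = np.asarray(states), np.asarray(actions)
    # Ridge-regularized least squares: W = (S^T S + reg*I)^-1 S^T A
    return np.linalg.solve(S.T @ S + reg * np.eye(S.shape[1]), S.T @ A)

# Toy demonstrations: the "expert" always doubles the state vector.
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 3))
A = 2.0 * S
W = behavioral_cloning(S, A)  # recovers roughly 2 * identity
```

Collecting a new skill then means recording new (state, action) pairs and refitting; as the article notes, the code itself does not change.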

Humans’ movements are not always optimal, and robots can discover new and more efficient ways of performing their tasks through a trial-and-error process enabled by reinforcement learning. Chen believes an integrated solution of imitation learning and reinforcement learning will be available to industries as early as next year.
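The trial-and-error refinement can be sketched in miniature: perturb the demonstrated policy’s parameters and keep any change that earns more reward. The toy hill-climbing loop below is a stand-in for the far more sophisticated reinforcement learning methods the team uses, and the reward function and starting parameters are invented for illustration:

```python
import numpy as np

def refine_by_trial_and_error(theta, reward_fn, iters=200, sigma=0.1, seed=0):
    """Stochastic hill climbing: perturb the policy parameters and
    adopt the perturbation whenever it earns more reward."""
    rng = np.random.default_rng(seed)
    best_r = reward_fn(theta)
    for _ in range(iters):
        candidate = theta + sigma * rng.normal(size=theta.shape)
        r = reward_fn(candidate)
        if r > best_r:                      # trial succeeded: keep it
            theta, best_r = candidate, r
    return theta, best_r

# Reward peaks at theta = [1.0, -0.5]; the imitation-learned start is elsewhere.
reward = lambda th: -np.sum((th - np.array([1.0, -0.5])) ** 2)
demo_theta = np.zeros(2)                    # suboptimal "human demonstration"
theta, r = refine_by_trial_and_error(demo_theta, reward)
```

The loop ends with a policy that outperforms the demonstration it started from, which is exactly the point: the human supplies a good initialization, and trial and error supplies the polish.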

“Recent breakthroughs in AI have enabled robots to learn locomotion, develop manipulation skills from trial and error, and to learn from VR demonstrations. However, all of these advances have been in simulation or laboratory environments,” says Sunil Dhaliwal, General Partner at Embodied Intelligence’s principal investor Amplify Partners. “The Embodied Intelligence team that led much of this work will now bring these cutting-edge AI and robotics advances into the real world.”

Teaching robots is not Chen’s ultimate goal. Meta learning, a machine learning approach in which systems learn how to learn rather than relying on expert data, is what the company plans to pursue over the next five to ten years. Meta learning is widely regarded as a possible pathway to artificial general intelligence (AGI), the long-range, human-level target of contemporary AI research.
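“Learning to learn” can be illustrated with one simple first-order meta-learning scheme (an assumption for illustration, not the specific methods in Chen and Abbeel’s papers): maintain a shared initialization and repeatedly nudge it toward each task’s adapted weights, so that a brand-new task from the same family can be fit in only a few gradient steps:

```python
import numpy as np

def sgd_adapt(phi, xs, ys, lr=0.05, steps=10):
    """Adapt to one task: a few least-squares gradient steps from init phi."""
    w = phi.copy()
    for _ in range(steps):
        grad = 2 * xs.T @ (xs @ w - ys) / len(xs)
        w = w - lr * grad
    return w

def meta_train(tasks, dim=1, meta_lr=0.1, meta_iters=100, seed=0):
    """Nudge a shared initialization toward each task's adapted weights,
    so future related tasks need only a few gradient steps."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(dim)
    for _ in range(meta_iters):
        xs, ys = tasks[rng.integers(len(tasks))]
        w = sgd_adapt(phi, xs, ys)
        phi = phi + meta_lr * (w - phi)   # move the init toward this solution
    return phi

# A family of related tasks: fit y = a * x for slopes a near 2.
xs = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
tasks = [(xs, a * xs[:, 0]) for a in (1.8, 2.0, 2.2)]
phi = meta_train(tasks)   # ends up near the family's center, a ≈ 2
```

No single task is memorized; what is learned is a starting point from which every task in the family is quick to master, which is the sense in which the system reduces its own teaching time.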

“At this stage, we are only trying to reduce teaching time. Ultimately, we want to make robots as humanlike as possible,” says Chen.

Chen has had encouraging research results with meta learning. This year, he and Abbeel published two papers on arXiv. Meta Learning Shared Hierarchies explores an approach for learning hierarchical policies in which shared primitives improve sample efficiency on unseen tasks, while Meta-Learning with Temporal Convolutions proposes a class of simple and generic meta-learner architectures, based on temporal convolutions, that is domain-agnostic and has no particular strategy or algorithm encoded into it.

Says Chen, “Our envisioned smart robot will have both meta learning and reinforcement learning. Reinforcement learning performs well on a single task, while meta learning allows robots to learn new tasks more quickly.”

The race to develop smart robots is highly competitive. This year, Google introduced a self-supervised imitation method that teaches robots simple skills from videos of human demonstrations. Startups are also in the game: in Union City, a 20-minute drive from Emeryville, Vicarious.ai is developing smart robots by simulating the human visual cortex in conjunction with generative models.

No one can predict how or when we might achieve AGI in smart robots, or who will get there first. But Embodied Intelligence is definitely a standout competitor in the race.


Journalist: Tony Peng | Editor: Michael Sarazen
