ANYmal does not have an easy life. One of the four-legged robot's main tasks is to learn how to stand up again, no matter how many times it is kicked, pushed, or otherwise sent tumbling to the ground. A research team from Switzerland's ETH Zurich trained ANYmal using reinforcement learning (RL) and published their work last Wednesday.
RL is typically confined to simulated applications, and it is rare to see it deployed on real robot systems. ANYmal is a trailblazer in this respect, and has garnered enormous attention from the AI community.
OOPS I FELL OVER
Researchers designed ANYmal as a quadrupedal robot for autonomous operation in challenging environments. Last year ANYmal was dispatched on its first missions, inspecting Zurich’s vast sewage system and visiting one of the world’s largest offshore converter platforms in the North Sea, where it scrutinized various platform areas inaccessible to humans. The 30kg dog-like robot’s delivery demo and dance moves got a very positive response at the recent CES 2019 in Las Vegas.
Designing legged robots is a challenge in part because humans simply haven't yet figured out how to accurately craft realistic animal movements by hand. The research team chose reinforcement learning to boost ANYmal's performance because RL requires little hand-crafted engineering: the controller is learned from trial and error rather than designed.
Training on real robots, however, is difficult and expensive, especially when dealing with dynamically balancing systems. ETH researchers elected to train a neural network in simulation and then transfer it to ANYmal: "Simulation is fast, cheap, and safe. Our simulation platform can simulate more than 2,000 ANYmals in real time on a normal desktop machine. In simulation, data is cheap and abundant." The training focused on increasing ANYmal's running speed and its skill and speed in recovering from falls.
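To make that parallelism concrete, below is a minimal, self-contained NumPy sketch of the general pattern: thousands of simulated robots advanced as one batched array while a policy is improved against them. The toy dynamics, the observation and action dimensions, and the simple random-search optimizer are all illustrative placeholders, not the team's actual simulator or RL algorithm.

```python
import numpy as np

# Toy stand-in for a batched rigid-body simulator: every simulated robot is
# one row of a matrix, so a single matrix multiply steps all of them at once.
NUM_ROBOTS = 2000          # the team reports simulating 2,000+ ANYmals in real time
OBS_DIM, ACT_DIM = 32, 12  # placeholder sizes: body + joint state in, joint commands out

rng = np.random.default_rng(0)
W_act = rng.normal(size=(ACT_DIM, OBS_DIM)) * 0.1    # toy action-to-state coupling
policy = rng.normal(size=(OBS_DIM, ACT_DIM)) * 0.01  # linear policy weights

def step_batch(states, actions):
    """Advance every simulated robot one timestep (toy linear dynamics)."""
    next_states = 0.99 * states + 0.01 * actions @ W_act
    # Placeholder reward: penalize distance from a nominal (e.g. upright) state.
    rewards = -np.linalg.norm(next_states, axis=1)
    return next_states, rewards

def evaluate(weights, horizon=50):
    """Mean return of a policy over one batched rollout of all robots."""
    states = rng.normal(size=(NUM_ROBOTS, OBS_DIM))
    total = 0.0
    for _ in range(horizon):
        actions = np.tanh(states @ weights)
        states, rewards = step_batch(states, actions)
        total += rewards.mean()
    return total

# Crude random-search improvement, standing in for the real policy-optimization RL.
best = evaluate(policy)
for it in range(10):
    candidate = policy + rng.normal(size=policy.shape) * 0.02
    score = evaluate(candidate)
    if score > best:
        policy, best = candidate, score
    print(f"iter {it}: best mean return {best:.3f}")
```

The batching is the point: one matrix operation advances every robot simultaneously, which is what makes simulating thousands of ANYmals on a single desktop machine plausible.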
RL-POWERED FALL RECOVERY
ETH researchers say the biggest drawback of simulations is that they cannot accurately capture the dynamics of complex robots, and as such training only in simulation "usually fails." The team decided a neural network could help close this gap between simulation and reality. They divided the robot's dynamics into three main parts (Control PC, Actuator, and Mechanics) and trained a neural network to represent these complex dynamics using data collected from the real robot.
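As a rough illustration of that idea, the sketch below trains a tiny network to map a short history of joint-state readings to the output a real actuator produced, with synthetic data standing in for logs from the physical robot. The input layout, network size, and training loop are assumptions made for illustration; they are not the paper's actual actuator model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "actuator network": a small MLP that predicts the torque a real
# actuator produces from a short history of (position error, joint velocity).
HIST = 3            # timesteps of history
IN_DIM = HIST * 2   # two features per history step
HIDDEN = 32

W1 = rng.normal(size=(IN_DIM, HIDDEN)) * 0.1; b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 1)) * 0.1;      b2 = np.zeros(1)

# Synthetic "robot log": in practice X would be recorded command/state
# histories and y the torques measured on the physical robot.
X = rng.normal(size=(4096, IN_DIM))
y = np.sin(X[:, :1])  # placeholder target function

lr = 1e-2
for epoch in range(201):
    # Forward pass of the two-layer network.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagation for a mean-squared-error loss.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    if epoch % 50 == 0:
        print(f"epoch {epoch}: mse {float((err ** 2).mean()):.4f}")
```

Once trained on real measurements, such a model can be dropped into the simulator in place of an idealized actuator, narrowing the gap between simulated and physical dynamics.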
Using RL in simulation enabled the controllers to learn directly from experience, overcoming the limitations of previous model-based approaches. The researchers explained the process is "fully automated and can optimize the controller end to end, from sensor readings to low-level control signals, thereby allowing for highly agile and efficient controllers." Training in simulation also lowered costs and allowed development to scale far beyond what physical hardware alone would permit.
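Here is a hedged sketch of what "end to end, from sensor readings to low-level control signals" can look like at run time: a single function that turns an observation vector into joint torques via a learned network followed by a fixed PD conversion. The observation layout, the (untrained, random) weights, and the gains below are placeholder assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

OBS_DIM, ACT_DIM = 32, 12   # placeholder sensor and joint dimensions
KP, KD = 50.0, 0.5          # illustrative PD gains

# Random weights stand in for the trained policy network.
W_pi = rng.normal(size=(OBS_DIM, ACT_DIM)) * 0.01

def read_sensors():
    """Stand-in for the estimated base state plus joint positions/velocities."""
    return rng.normal(size=OBS_DIM)

def control_step(joint_pos, joint_vel):
    """One control-loop tick: sensors -> policy -> low-level joint torques."""
    obs = read_sensors()
    targets = np.tanh(obs @ W_pi)  # policy output: joint position targets
    torques = KP * (targets - joint_pos) - KD * joint_vel
    return torques

print(control_step(np.zeros(ACT_DIM), np.zeros(ACT_DIM)))
```

Because everything between observation and command is learned rather than hand-designed, training can shape the robot's behaviour directly, with no scripted gait or recovery modules in between.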
ANYmal is not the only dog-like, AI-powered robot on the scene. Boston Dynamics' "SpotMini" can also dance to music and right itself after a fall. But unlike SpotMini, ANYmal learned how to stand up again all by itself, without relying on human expertise, demonstrations, or intervention. ANYmal's superior ability to think on its feet could give it an edge in challenging solo missions such as rescue, visual and thermal inspection, and gas detection.
The Robotic Systems Lab at ETH Zurich has been working on the ANYmal project for years. Their pup owes its latest leap forward to recent improvements in cameras and sensors. Because ANYmal is designed for dangerous tasks in demanding environments, researchers equipped it with cutting-edge cameras and LiDAR so it can keep operating in, for example, low-light conditions.
The ETH research team has high hopes for ANYmal: "Legged robots may one day rescue people in forests and mountains, climb stairs to carry payloads in construction sites, inspect unstructured underground tunnels, and explore other planets."
The paper Learning agile and dynamic motor skills for legged robots was published in Science Robotics.
Journalist: Fangyu Cai | Editor: Michael Sarazen