The Berkeley Artificial Intelligence Research Lab’s DeepMimic presentation created a buzz this summer at the prestigious computer graphics conference SIGGRAPH 2018 in Vancouver. BAIR researchers adopted a Reinforcement Learning (RL) technique that enables simulated humanoid characters to accurately and convincingly reproduce dynamic and acrobatic physical movements learned from motion capture data of human subjects.
The paper’s first author, Berkeley PhD student Xue Bin Peng, has now open-sourced the project’s code, data, and frameworks. Moreover, Peng’s new research demonstrates that DeepMimic’s simulated characters can also learn to perform highly dynamic movements using regular video clips of human examples as input data. This will greatly simplify the training process, as videos are much more readily available than motion capture data.
Because the trained computational model can realistically interpret the physics of humans and creatures in motion, the DeepMimic technique has potential applications in animation production, where it could be used to automatically bring such realism to human, animal, and fantasy characters.
There are also potential applications beyond visual demonstration. Researchers could, for example, train characters to respond to different environmental changes, a technique that could then be used to train real-world robots to perform complex movements via simulation.
Reinforcement learning is increasingly being used to solve robotic tasks such as motion control problems, where the reward function can enable machines to acquire effective skills via self-learning. However, deep reinforcement learning methods can also result in unusual behaviors such as jittering, asymmetric gaits, or excessive movement of limbs.
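The fix DeepMimic proposes for such unnatural behavior is to reward similarity to a reference motion rather than relying on a task reward alone. As a rough illustration (not code from the DeepMimic repository; the function name, weight, and pose representation are assumptions), a pose-imitation reward can be the exponentiated negative squared error between the character's joint angles and those of the motion-capture clip:

```python
import math

def imitation_reward(char_pose, ref_pose, weight=2.0):
    """Sketch of a pose-imitation reward term.

    char_pose, ref_pose: sequences of joint angles (radians) for the
    simulated character and the reference motion at the current frame.
    Returns a value in (0, 1]; 1.0 means the poses match exactly.
    Rewarding closeness to recorded human motion discourages the
    jittering and asymmetric gaits a pure task reward can produce.
    """
    err = sum((c - r) ** 2 for c, r in zip(char_pose, ref_pose))
    return math.exp(-weight * err)

# A perfect match earns the maximum reward; deviation decays it smoothly.
print(imitation_reward([0.1, -0.3], [0.1, -0.3]))  # 1.0
print(imitation_reward([0.5, -0.3], [0.1, -0.3]) < 1.0)  # True
```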
Synced previewed the DeepMimic research in April, when the BAIR team introduced a method called Reference State Initialization (RSI). RSI acts as a sort of advisor, exposing the character to states in a movement that are likely to yield high rewards when properly performed. The method speeds up training and significantly improves motion reproduction results.
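The core idea behind RSI can be sketched in a few lines: instead of always starting an episode at the beginning of the reference clip, the character is initialized at a randomly sampled frame, so every phase of the motion (including the high-reward middle of a flip, for instance) gets practiced. This is an illustrative sketch, not code from the DeepMimic repository; names and data layout are assumptions:

```python
import random

def reset_with_rsi(reference_motion):
    """Reference State Initialization (sketch).

    reference_motion: a list of poses, one per frame of the clip.
    Samples a random frame index ("phase") and returns it along with
    the corresponding pose, which would be used to set the simulated
    character's initial state for the episode.
    """
    phase = random.randrange(len(reference_motion))
    return phase, reference_motion[phase]

# Toy clip of four poses; each reset may start at any of them.
clip = [[0.0, 0.0], [0.2, -0.1], [0.4, -0.3], [0.1, 0.0]]
phase, init_pose = reset_with_rsi(clip)
print(phase, init_pose)
```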
The paper DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills has opened up exciting possibilities for improving visual demonstration with highly dynamic character skills.
The DeepMimic open-sourced code, data, and frameworks are available on GitHub.
Journalist: Fangyu Cai | Editor: Michael Sarazen