There has been a surge of research interest in deep reinforcement learning (DRL), encouraged by its widely acknowledged success in applications such as game playing and robotic control. New advancements in DRL have also laid a foundation for the modelling of complex human motor control processes and the prediction and control of a range of human movements.
In the new paper Deep Reinforcement Learning for Modeling Human Locomotion Control in Neuromechanical Simulation, researchers from Stanford University, UC Berkeley and CMU review neuromechanical simulations and DRL, with a focus on modelling the control of human locomotion. Biomechanics and motor control researchers have long studied motor control models using neuromechanical simulations, which produce physically plausible motions of a musculoskeletal model and are used to analyze observed human movements. However, the team notes that DRL has rarely been applied to model human locomotion control in neuromechanical simulations, a gap they suggest has hindered the development of accurate motion prediction models.
The researchers note that current neuromechanical control models were mostly built on structural and functional control hypotheses observed in and shared across many animals, and are limited to modelling lower-layer (spinal cord) control and generating steady locomotion behaviours. In such human locomotion control models, the lower layer generates basic motor patterns while a higher layer (the supraspinal system) sends commands to the lower layer to modulate those underlying patterns.
As an alternative to building control models that encode specific physiological hypotheses and then evaluating them in simulation, the researchers propose using DRL to train artificial neural network controllers directly in neuromechanical simulations. “Deep RL can be thought of as training a black-box controller that produces motions of interest,” the paper reads. “Despite the discrepancy between artificial and biological neural networks, such means of developing versatile controllers could be useful in investigating human motor control.” Recent advances in DRL have made it possible to develop controllers that take high-dimensional inputs and produce outputs applicable to human musculoskeletal models.
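The black-box view of controller training can be illustrated with a deliberately tiny sketch. Everything below is an illustrative assumption, not the paper's actual setup: a one-dimensional point-mass "muscle" environment stands in for a full musculoskeletal simulator, a linear-sigmoid policy stands in for a deep network, and simple hill climbing stands in for a modern DRL algorithm. The reward structure (forward progress minus an effort penalty, a crude stand-in for metabolic cost) mirrors the kind of objective used in this line of work.

```python
import numpy as np

def simulate(policy_weights, steps=100, dt=0.1):
    """Toy environment: a point mass driven by a bounded excitation
    signal, loosely analogous to a single muscle. Returns the total
    reward (forward progress minus a quadratic effort penalty)."""
    pos, vel, reward = 0.0, 0.0, 0.0
    for _ in range(steps):
        obs = np.array([pos, vel, 1.0])                  # state plus bias term
        # Black-box policy: linear map squashed to an excitation in (0, 1)
        excitation = 1.0 / (1.0 + np.exp(-obs @ policy_weights))
        force = excitation - 0.1 * vel                   # actuation with damping
        vel += dt * force
        pos += dt * vel
        reward += vel - 0.05 * excitation ** 2           # progress minus effort
    return reward

def hill_climb(iterations=200, noise=0.5, seed=0):
    """Train the controller purely from reward signals, treating the
    policy as a black box: perturb the weights, keep improvements."""
    rng = np.random.default_rng(seed)
    best_w = np.zeros(3)
    best_r = simulate(best_w)
    for _ in range(iterations):
        candidate = best_w + noise * rng.standard_normal(3)
        r = simulate(candidate)
        if r > best_r:
            best_w, best_r = candidate, r
    return best_w, best_r

trained_w, trained_r = hill_climb()
```

The point of the sketch is the interface, not the physics: the optimizer never inspects the environment's internals, exactly the "black-box controller" framing the paper describes. Real systems in this area replace the toy dynamics with a full musculoskeletal simulation and the hill climber with a DRL algorithm such as PPO.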
For the past three years, the research team has organized the Learn to Move competition series at NeurIPS, a leading machine learning conference. In last year's challenge, Learn to Move – Walk Around, the top performers successfully adapted state-of-the-art DRL techniques to control a 3D human musculoskeletal model, the first time such locomotion behaviours had been demonstrated in neuromechanical simulations without reference motion data. The researchers believe DRL can offer additional unique insights to aid the development of control models that generate realistic and complex motions such as quick turns and walk-to-stand transitions.
The team plans to continue hosting its Learn to Move competition and hopes to attract and encourage interdisciplinary studies and collaborations in the field of modelling human motor control for biomechanics and rehabilitation.
The paper Deep Reinforcement Learning for Modeling Human Locomotion Control in Neuromechanical Simulation is on bioRxiv.
Reporter: Fangyu Cai | Editor: Michael Sarazen
Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors
This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with the report, we have introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.