Aug 07, 2018
Webinar: Robust control strategies for musculoskeletal models using deep reinforcement learning
An introduction to reinforcement learning and its application to developing control strategies
DID YOU MISS THIS EVENT?
A recording of the event is available for viewing, and a copy of the slides is available for download.

Details
Title: Robust control strategies for musculoskeletal models using deep reinforcement learning
Speaker: Lukasz Kidzinski, Stanford University
Time: Tuesday, August 7, 2018, at 10:00 a.m. Pacific Daylight Time

Abstract
Predicting how the human motor control system adapts to new conditions during gait is a grand challenge in biomechanics. Computational models that emulate human motor control could assist in many applications, such as improving surgical planning for gait pathologies and designing devices to restore mobility for lower-limb amputees. Deep reinforcement learning is a promising approach for modeling motor control and its adaptation to new conditions, but it has not been widely explored in biomechanics research. In this webinar, we will provide an introduction to reinforcement learning and highlight its use for biomechanical applications.

Traditional physics-based biomechanical simulations track experimental data, such as joint kinematics and ground reaction forces (GRFs), which prevents these studies from investigating how kinematics and GRFs would adapt to a new control strategy. Generating simulations de novo is currently difficult because of the large optimization space, which typically requires movement-specific controllers, but recent machine learning techniques have been shown to search large spaces efficiently. One such technique, reinforcement learning, is a machine learning approach in which an agent takes actions to maximize some user-defined performance metric, or reward, making it possible to build a complex controller for any movement without movement-specific domain knowledge.
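To make the "take actions to maximize a reward" idea concrete, here is a deliberately tiny sketch: a random hill-climbing loop that improves a single scalar action against a hypothetical reward function. The reward function, step count, and noise scale below are illustrative stand-ins, not anything from the webinar or from osim-rl; real deep RL replaces this loop with gradient-based policy optimization over high-dimensional muscle excitations.

```python
import random

def reward(action):
    # Hypothetical user-defined performance metric, peaked at
    # action = 0.7 (stands in for, e.g., matching a target gait speed).
    return -(action - 0.7) ** 2

def hill_climb(steps=2000, noise=0.1, seed=0):
    """Toy reward-maximization loop: try a perturbed action, keep it
    only if it improves the reward. No domain knowledge is used beyond
    the reward signal itself."""
    rng = random.Random(seed)
    action = 0.0
    best = reward(action)
    for _ in range(steps):
        candidate = action + rng.gauss(0, noise)
        r = reward(candidate)
        if r > best:
            action, best = candidate, r
    return action, best

final_action, final_reward = hill_climb()
```

The loop converges near the reward's peak purely by trial and error, which is the core behavior that deep reinforcement learning scales up to full musculoskeletal controllers.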
We have developed osim-rl, an OpenSim-based platform that enables anyone to easily develop and test new reinforcement-learning-based control strategies with a physiologically accurate musculoskeletal model. In the webinar, we will introduce this platform, which is being used in a challenge to develop a controller that enables a model with a prosthetic leg to walk at requested speeds and in requested directions. We encourage the biomechanics community to participate and will discuss how to get started. For details about the Conference on Neural Information Processing Systems (NIPS) 2018 challenge, visit https://www.crowdai.org/challenges/nips-2018-ai-for-prosthetics-challenge.
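osim-rl exposes its environments through the familiar reset/step control loop popularized by OpenAI Gym. The sketch below uses a self-contained stand-in environment so the loop structure is visible without an OpenSim installation; the class name, observations, and reward here are placeholders and not the real osim-rl API, whose actual environments (and the challenge's reward definition) are documented on the crowdAI page linked above.

```python
class StandInEnv:
    """Stand-in environment with the Gym-style reset/step interface.
    In osim-rl, the real environment wraps an OpenSim musculoskeletal
    model: actions are muscle excitations and the reward scores gait."""

    def __init__(self, episode_len=5):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0, 0.0]  # placeholder observation vector

    def step(self, action):
        self.t += 1
        obs = [self.t * 0.01, 0.0]   # placeholder next state
        reward = sum(action)         # placeholder reward signal
        done = self.t >= self.episode_len
        return obs, reward, done, {}

def run_episode(env, policy):
    """Standard RL interaction loop: observe, act, accumulate reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, r, done, _ = env.step(policy(obs))
        total += r
    return total

# A fixed (non-learning) policy, just to exercise the loop.
total_reward = run_episode(StandInEnv(), lambda obs: [0.5, 0.5])
```

Any RL algorithm that speaks this reset/step protocol can be pointed at the musculoskeletal environment, which is what makes the platform accessible to non-biomechanists.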