Differentiable physics models for real-world offline model-based reinforcement learning
Lutter, Michael, Silberbauer, Johannes, Watson, Joe, and Peters, Jan

Publication: 2021 IEEE International Conference on Robotics and Automation (ICRA)

Abstract: A limitation of model-based reinforcement learning (MBRL) is the exploitation of errors in the learned models. Black-box models can fit complex dynamics with high fidelity, but their behavior is undefined outside of the data distribution. Physics-based models are better at extrapolating, due to the general validity of their informed structure, but underfit in the real world due to the presence of unmodeled phenomena. In this work, we demonstrate experimentally that, for the offline MBRL setting, physics-based models can be beneficial compared to high-capacity function approximators if the mechanical structure is known. Using offline MBRL, the physics-based models learn to perform the ball-in-a-cup (BiC) task on a physical manipulator from only 4 minutes of sampled data. We find that black-box models consistently produce unviable policies for BiC, as all predicted trajectories diverge to physically implausible states, despite having access to more data than the physics-based model. In addition, we generalize the approach of physics parameter identification from modeling holonomic multi-body systems to systems with nonholonomic dynamics using end-to-end automatic differentiation.
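
As a rough illustration of the parameter-identification idea only (not the paper's implementation, which covers multi-body and nonholonomic systems), the sketch below fits the length and a lumped viscous damping coefficient of a simple pendulum by backpropagating a trajectory-matching loss through an explicit-Euler rollout in PyTorch. All names, constants, and hyperparameters here are illustrative placeholders.

import torch

def rollout(log_length, damping, q0, qd0, dt=0.01, steps=200, g=9.81):
    # Explicit-Euler simulation of a damped pendulum; gradients flow
    # through every integration step, so the physics parameters can be
    # fit end-to-end with automatic differentiation.
    length = log_length.exp()          # log-parameterization keeps the length positive
    q, qd = q0, qd0
    trajectory = []
    for _ in range(steps):
        qdd = -(g / length) * torch.sin(q) - damping * qd
        qd = qd + dt * qdd
        q = q + dt * qd
        trajectory.append(q)
    return torch.stack(trajectory)

# Stand-in "measurements" generated from known parameters (placeholder for real data).
true_log_length = torch.log(torch.tensor(0.5))
true_damping = torch.tensor(0.1)
q0, qd0 = torch.tensor(1.2), torch.tensor(0.0)
with torch.no_grad():
    observed = rollout(true_log_length, true_damping, q0, qd0)

# Learnable physics parameters, optimized through the differentiable simulator.
log_length = torch.tensor(0.0, requires_grad=True)
damping = torch.tensor(0.0, requires_grad=True)
optimizer = torch.optim.Adam([log_length, damping], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    predicted = rollout(log_length, damping, q0, qd0)
    loss = torch.mean((predicted - observed) ** 2)   # trajectory-matching loss
    loss.backward()                                  # backprop through the rollout
    optimizer.step()

print(f"identified length:  {log_length.exp().item():.3f} (true: 0.500)")
print(f"identified damping: {damping.item():.3f} (true: 0.100)")

The same pattern scales to richer mechanical structures by swapping the hand-written pendulum dynamics for a differentiable rigid-body model whose physical parameters are the learnable quantities.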

Bibtex:

@inproceedings{lutter-2021-differentiable-learning,
  title = {Differentiable physics models for real-world offline model-based reinforcement learning},
  author = {Lutter, Michael and Silberbauer, Johannes and Watson, Joe and Peters, Jan},
  year = {2021},
  booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
  pages = {4163--4170},
  organization = {IEEE}
}