Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning - Robotics Institute Carnegie Mellon University


Wang SJ, Triest S, Wang W, Scherer S, and Johnson A
Conference Paper, Proceedings of the 5th Conference on Robot Learning, Vol. 164, pp. 224-233, 2022

Abstract

Autonomous navigation of wheeled robots in rough terrain environments has been a long-standing challenge. In these environments, predicting the robot’s trajectory can be challenging due to the complexity of terrain interactions, as well as the divergent dynamics that cause model uncertainty to compound and propagate poorly. This inhibits the robot’s long-horizon decision-making capabilities and often leads to shortsighted navigation strategies. We propose a model-based reinforcement learning algorithm for rough terrain traversal that trains a probabilistic dynamics model to account for the propagating effects of uncertainty. During trajectory prediction, a trajectory tracking controller is incorporated so that closed-loop trajectories are predicted. Our method further increases prediction accuracy and precision by using constrained optimization to find trajectories with low divergence. Using this method, wheeled robots can find non-myopic control strategies that reach destinations with a higher probability of success. We show results on simulated and real-world robots navigating rough terrain environments.
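The pipeline the abstract describes — propagating uncertainty through a learned probabilistic dynamics model, predicting closed-loop trajectories under a tracking controller, and constraining the planner to low-divergence trajectories — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model or planner: the linear dynamics, the state-dependent noise model, the proportional tracking gain, and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def probabilistic_dynamics(x_mean, x_var, u):
    """Toy stand-in for a learned probabilistic model predicting a
    Gaussian next state (mean and diagonal variance). Illustrative only."""
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # [position, velocity] dynamics
    B = np.array([[0.0], [0.1]])
    next_mean = A @ x_mean + B @ u
    # Assumed state-dependent process noise: "rougher" states are noisier.
    process_var = 0.01 + 0.05 * np.abs(next_mean)
    next_var = (A**2) @ x_var + process_var   # propagate diagonal uncertainty
    return next_mean, next_var

def closed_loop_rollout(x0, reference, gain=1.0, horizon=20):
    """Predict the closed-loop trajectory: a tracking controller steers the
    predicted mean toward the reference at every step, and the accumulated
    predicted variance serves as the trajectory's divergence measure."""
    x_mean, x_var = x0.copy(), np.zeros_like(x0)
    divergence, means = 0.0, [x0.copy()]
    for t in range(horizon):
        u = np.array([gain * (reference[t, 1] - x_mean[1])])  # track velocity
        x_mean, x_var = probabilistic_dynamics(x_mean, x_var, u)
        divergence += x_var.sum()
        means.append(x_mean.copy())
    return np.array(means), divergence

def plan_low_divergence(x0, candidate_refs, max_divergence):
    """Constrained selection: among candidate reference trajectories, keep
    those whose predicted divergence is below the bound, then pick the one
    that makes the most progress."""
    best, best_progress = None, -np.inf
    for ref in candidate_refs:
        means, div = closed_loop_rollout(x0, ref)
        if div <= max_divergence and means[-1, 0] > best_progress:
            best, best_progress = ref, means[-1, 0]
    return best
```

In this sketch, faster reference trajectories cover more ground but visit states with higher predicted noise, so their divergence grows; the divergence constraint then trades progress against prediction reliability, mirroring the abstract's trade-off between non-myopic plans and compounding uncertainty.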

BibTeX

@conference{Wang-2022-139815,
author = {Wang, S. J. and Triest, S. and Wang, W. and Scherer, S. and Johnson, A.},
title = {Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning},
booktitle = {Proceedings of the 5th Conference on Robot Learning},
year = {2022},
month = {January},
editor = {Faust, A. and Hsu, D. and Neumann, G.},
volume = {164},
series = {Proceedings of Machine Learning Research},
pages = {224-233},
publisher = {PMLR},
}