Modern robotics has brought to life intelligent behavior through powerful planning tools that enjoy increasing success every year. These tools have given us machines that successfully manipulate objects, navigate rugged terrain, drive in urban settings, and play chess better than any human on Earth. Unfortunately, machine learning did not develop alongside robotics and has therefore traditionally focused on problems such as classification, regression, statistical estimation, clustering, manifold learning, and deep learning. Each of these broad learning areas arose with one or more applications in mind, but those applications were rarely decision making. One area of machine learning that was developed specifically for discovering good decision policies is reinforcement learning. It is well known, though, that this problem is fundamentally hard. Reinforcement learning ignores both important sources of information, such as demonstrations, and important advances in modern planning technology. Within robotics, we most frequently find machine learning applied as a collection of pre-made tools positioned to train system subcomponents in hopes of positively affecting the robot's behavior.
I aim to move beyond applied machine learning for robotics and to develop, simultaneously, new learning algorithms and new planners that are tailored to coexist. My work characterizes planners that are trainable from trajectory demonstrations, develops learning algorithms to train those planners, and designs novel trainable planners that apply to a wide range of problems.