
First-Person Activity Forecasting from Video with Online Inverse Reinforcement Learning

Journal Article, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 42, No. 2, pp. 304-317, February 2020

Abstract

We address the problem of incrementally modeling and forecasting the long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, Darko, goes further to reason about semantic states (will I pick up an object?) and future goal states that are far away in both space and time. Darko learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas Darko discovers the transitions, rewards, and goals of a user from streaming data. Among other results, we show that Darko forecasts goals better than competing methods in both noisy and ideal settings, and that our approach is theoretically and empirically no-regret.
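The abstract describes two ingredients of online IRL: updating a reward model incrementally from streaming demonstrations, and forecasting goals from partially observed behavior. The sketch below is a minimal, hypothetical illustration of these MaxEnt-IRL-style ideas on a toy chain MDP; the environment, one-hot features, learning rate, and goal-scoring rule are all assumptions for illustration and are not the paper's Darko implementation.

```python
import numpy as np

# Toy chain MDP (an assumption, not the paper's setup): states 0..N-1, actions {left, stay, right}.
N_STATES = 10
ACTIONS = [-1, 0, 1]
GAMMA = 0.95

def step(s, a):
    return int(np.clip(s + a, 0, N_STATES - 1))

def phi(s):
    """One-hot state features; reward is linear in features, r(s) = theta @ phi(s)."""
    f = np.zeros(N_STATES)
    f[s] = 1.0
    return f

def soft_value_iteration(theta, goal, iters=100):
    """Soft (MaxEnt) value iteration with the candidate goal treated as absorbing."""
    r = np.array([theta @ phi(s) for s in range(N_STATES)])
    V = np.full(N_STATES, -100.0)
    V[goal] = 0.0
    for _ in range(iters):
        Q = np.zeros((N_STATES, len(ACTIONS)))
        for s in range(N_STATES):
            for ai, a in enumerate(ACTIONS):
                Q[s, ai] = r[s] + GAMMA * V[step(s, a)]
        V = np.log(np.exp(Q).sum(axis=1))  # soft-max over actions
        V[goal] = 0.0                      # keep the goal absorbing
    policy = np.exp(Q - V[:, None])        # pi(a|s) proportional to exp(Q - V)
    return V, policy / policy.sum(axis=1, keepdims=True)

def expected_feature_counts(policy, start, horizon):
    """Estimate expected feature visitations by rolling out the stochastic policy."""
    counts, s = np.zeros(N_STATES), start
    for _ in range(horizon):
        counts += phi(s)
        ai = np.random.choice(len(ACTIONS), p=policy[s])
        s = step(s, ACTIONS[ai])
    return counts

# Online update: one gradient step per newly observed trajectory (streaming data),
# matching demonstrated vs. model-expected feature counts, in the MaxEnt-IRL style.
theta = np.zeros(N_STATES)
lr = 0.1                                  # learning rate is an assumption
demo = [0, 1, 2, 3, 4, 5]                 # hypothetical observed partial trajectory
goal_candidates = [0, 5, 9]

demo_counts = sum(phi(s) for s in demo)
_, policy = soft_value_iteration(theta, goal=5)
model_counts = expected_feature_counts(policy, start=demo[0], horizon=len(demo))
theta += lr * (demo_counts - model_counts)  # online (sub)gradient step

# Goal forecasting: score each candidate goal by how much the observed partial
# trajectory has progressed toward it under the current soft values.
scores = []
for g in goal_candidates:
    Vg, _ = soft_value_iteration(theta, goal=g)
    scores.append(Vg[demo[-1]] - Vg[demo[0]])
posterior = np.exp(scores - np.max(scores))
posterior /= posterior.sum()
print("Goal posterior over candidates", goal_candidates, ":", posterior)
```

In this sketch the per-trajectory gradient step stands in for the paper's incremental updates to rewards, transitions, and goals, and the value-difference scoring is only one plausible way to turn soft values into a goal posterior.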

BibTeX

@article{Rhinehart-2020-109849,
author = {Nicholas Rhinehart and Kris M. Kitani},
title = {First-Person Activity Forecasting from Video with Online Inverse Reinforcement Learning},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
year = {2020},
month = {February},
volume = {42},
number = {2},
pages = {304--317},
}