Learning Latent Variable and Predictive Models of Dynamical Systems

Sajid Siddiqi
Doctoral dissertation, Tech. Report CMU-RI-TR-09-39, Robotics Institute, Carnegie Mellon University, January 2010



Abstract
A variety of learning problems in robotics, computer vision and other areas of artificial intelligence can be construed as problems of learning statistical models for dynamical systems from sequential observations. Good dynamical system models allow us to represent and predict observations in these systems, which in turn enables applications such as classification, planning, control, simulation, anomaly detection and forecasting. One class of dynamical system models assumes the existence of an underlying hidden random variable that evolves over time and emits the observations we see. Past observations are summarized into the belief distribution over this random variable, which represents the state of the system. This assumption leads to ‘latent variable models’ which are used heavily in practice. However, learning algorithms for these models still face a variety of issues such as model selection, local optima and instability. The representational ability of these models also differs significantly based on whether the underlying latent variable is assumed to be discrete, as in Hidden Markov Models (HMMs), or real-valued, as in Linear Dynamical Systems (LDSs). Another recently introduced class of models represents state as a set of predictions about future observations rather than as a latent variable summarizing the past. These ‘predictive models’, such as Predictive State Representations (PSRs), are provably more powerful than latent variable models and hold the promise of allowing more accurate, efficient learning algorithms since no hidden quantities are involved. However, this promise has not been realized. In this thesis we propose novel learning algorithms that address the issues of model selection, local optima and instability in learning latent variable models. We show that certain ‘predictive’ latent variable model learning methods bridge the gap between latent variable and predictive models.
We also propose a novel latent variable model, the Reduced-Rank HMM (RR-HMM), that combines desirable properties of discrete and real-valued latent-variable models. We show that reparameterizing the class of RR-HMMs yields a subset of PSRs, and propose an asymptotically unbiased predictive learning algorithm for RR-HMMs and PSRs along with finite-sample error bounds for the RR-HMM case. In terms of efficiency and accuracy, our methods outperform alternatives on dynamic texture videos, mobile robot visual sensing data, and other domains.
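The abstract's notion of state as a belief distribution, and the RR-HMM's structural assumption of a low-rank transition matrix, can be illustrated with a small sketch. This is a hypothetical toy example (the rank, state count, and random parameters are illustrative assumptions, not taken from the thesis): it runs the standard HMM forward/filtering recursion on an HMM whose transition matrix is constrained to rank 2 despite having 4 discrete states.

```python
import numpy as np

rng = np.random.default_rng(0)

k, m = 4, 2  # number of discrete states, rank of the transition matrix
# Build a rank-m transition matrix T = A @ B, then normalize rows so
# each row is a valid conditional distribution over next states.
A = rng.random((k, m))
B = rng.random((m, k))
T = A @ B
T /= T.sum(axis=1, keepdims=True)

# Emission matrix: O[s, o] = P(observation o | state s), with 3 symbols.
O = rng.random((k, 3))
O /= O.sum(axis=1, keepdims=True)

def belief_update(belief, obs):
    """One step of the HMM forward (filtering) recursion:
    propagate the belief through the dynamics, then condition
    on the new observation and renormalize."""
    b = (belief @ T) * O[:, obs]
    return b / b.sum()

belief = np.full(k, 1.0 / k)      # uniform prior over states
for obs in [0, 2, 1]:             # an arbitrary observation sequence
    belief = belief_update(belief, obs)

print(belief)                      # belief remains a distribution over k states
print(np.linalg.matrix_rank(T))    # rank stays m = 2, not k = 4
```

The point of the sketch is that row-normalizing a rank-m product does not raise its rank, so the belief dynamics effectively live in an m-dimensional subspace even though the model has k discrete states; this is the kind of structure the RR-HMM exploits.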

Notes
Associated Lab(s) / Group(s): Auton Lab
Number of pages: 191

Text Reference
Sajid Siddiqi, "Learning Latent Variable and Predictive Models of Dynamical Systems," doctoral dissertation, Tech. Report CMU-RI-TR-09-39, Robotics Institute, Carnegie Mellon University, January 2010

BibTeX Reference
@phdthesis{Siddiqi_2010_6517,
   author  = "Sajid Siddiqi",
   title   = "Learning Latent Variable and Predictive Models of Dynamical Systems",
   school  = "Robotics Institute, Carnegie Mellon University",
   month   = "January",
   year    = "2010",
   number  = "CMU-RI-TR-09-39",
   address = "Pittsburgh, PA",
}