
Fast State Discovery for HMM Model Selection and Learning

Sajid Siddiqi, Geoffrey Gordon, and Andrew Moore
Conference Paper, Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS '07), pp. 492–499, March 2007

Abstract

Choosing the number of hidden states and their topology (model selection) and estimating model parameters (learning) are important problems for Hidden Markov Models. This paper presents a new state-splitting algorithm that addresses both these problems. The algorithm models more information about the dynamic context of a state during a split, enabling it to discover underlying states more effectively. Compared to previous top-down methods, the algorithm also touches a smaller fraction of the data per split, leading to faster model search and selection. Because of its efficiency and ability to avoid local minima, the state-splitting approach is a good way to learn HMMs even if the desired number of states is known beforehand. We compare our approach to previous work on synthetic data as well as several real-world data sets from the literature, revealing significant improvements in efficiency and test-set likelihoods. We also compare to previous algorithms on a sign-language recognition task, with positive results.
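Below is a minimal sketch of the outer model-selection loop the abstract alludes to: grow the number of hidden states and keep a candidate only while a penalized-likelihood score (BIC here) keeps improving. It is not the authors' split-and-refine procedure; it simply refits a full Gaussian HMM at each candidate size using the hmmlearn library, and the helper names (`bic`, `select_hmm`) and the BIC parameter count are illustrative assumptions.

```python
# Sketch: HMM model selection by growing the state count until BIC stops improving.
# This is a generic stand-in for a state-splitting search, not the paper's algorithm.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency


def bic(model, X):
    """BIC for a diagonal-covariance Gaussian HMM (parameter count is approximate)."""
    k, d = model.n_components, X.shape[1]
    # free parameters: transition rows, start probs, means, diagonal variances
    n_params = k * (k - 1) + (k - 1) + k * d + k * d
    return -2.0 * model.score(X) + n_params * np.log(len(X))


def select_hmm(X, max_states=10, seed=0):
    """Refit an HMM at each candidate size; stop when BIC no longer improves."""
    best_model, best_bic = None, np.inf
    for k in range(1, max_states + 1):
        model = GaussianHMM(n_components=k, covariance_type="diag",
                            n_iter=100, random_state=seed).fit(X)
        score = bic(model, X)
        if score < best_bic:
            best_model, best_bic = model, score
        else:
            break  # adding another state no longer pays for itself
    return best_model


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy sequence drawn from two well-separated regimes
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
    print("selected states:", select_hmm(X).n_components)
```

The paper's contribution is to replace the full refit at each size with a targeted state split that reuses the dynamic context of the split state and touches only a fraction of the data, which is what makes the search over model sizes fast.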

BibTeX

@conference{Siddiqi-2007-17033,
author = {Sajid Siddiqi and Geoffrey Gordon and Andrew Moore},
title = {Fast State Discovery for HMM Model Selection and Learning},
booktitle = {Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS '07)},
year = {2007},
month = {March},
pages = {492--499},
}