On Two Methods for Semi-Supervised Structured Prediction

Daniel Munoz, J. Andrew (Drew) Bagnell and Martial Hebert
Tech. Report, CMU-RI-TR-10-02, Robotics Institute, Carnegie Mellon University, January, 2010

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


Obtaining labeled data for training classifiers is an expensive task that must be repeated for any new application. It is even more expensive for structured models, such as Conditional Random Fields, where some notion of coherence in the labeled data must be maintained. To address this issue, semi-supervised methods are often applied to reduce the number of labeled examples needed to train adequate models. Previous work in this area has resulted in complex training procedures that do not scale to the large numbers of examples, features, labels, and interactions necessary for vision tasks. In this paper, we present and analyze two novel approaches for semi-supervised training of structured models that satisfy these requirements. While we unfortunately do not observe a significant benefit from using unlabeled data in our real-world experiments, the simple algorithms we present here may be useful in other applications where the necessary assumptions are satisfied.

BibTeX Reference
@techreport{munoz2010semisupervised,
  author      = {Daniel Munoz and J. Andrew (Drew) Bagnell and Martial Hebert},
  title       = {On Two Methods for Semi-Supervised Structured Prediction},
  year        = {2010},
  month       = {January},
  institution = {Carnegie Mellon University},
  address     = {Pittsburgh, PA},
  number      = {CMU-RI-TR-10-02},
}