Robust Kernel Principal Component Analysis

Minh Hoai Nguyen and Fernando De la Torre Frade
Advances in Neural Information Processing Systems, December 2008.


Download
  • Adobe Portable Document Format (PDF) (260 KB)
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract
Kernel Principal Component Analysis (KPCA) is a popular generalization of linear PCA that allows non-linear feature extraction. In KPCA, data in the input space are mapped to a (usually) higher-dimensional feature space where they can be linearly modeled. The feature space is typically induced implicitly by a kernel function, and linear PCA in the feature space is performed via the kernel trick. However, because the feature space is only defined implicitly, some extensions of PCA, such as robust PCA, cannot be directly generalized to KPCA. This paper presents a technique to overcome this problem and extends it to a unified framework for treating noise, missing data, and outliers in KPCA. Our method is based on a novel cost function for performing inference in KPCA. Extensive experiments on both synthetic and real data show that our algorithm outperforms existing methods.
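
As context for the kernel trick mentioned in the abstract, the sketch below shows standard (non-robust) KPCA in plain NumPy: the kernel matrix stands in for inner products in the implicit feature space, the matrix is centered in that space, and its top eigenvectors yield the non-linear components. This is textbook KPCA, not the robust cost function proposed in the paper; the RBF kernel, the gamma value, and the toy concentric-circles data are assumptions chosen for illustration.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
        sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
        return np.exp(-gamma * sq)

    def kpca(X, n_components=2, gamma=1.0):
        n = X.shape[0]
        K = rbf_kernel(X, X, gamma)
        # Center the (implicit) feature-space data: Kc = K - 1K - K1 + 1K1
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one
        # Eigenvectors of the centered kernel matrix give the components
        vals, vecs = np.linalg.eigh(Kc)
        idx = np.argsort(vals)[::-1][:n_components]
        vals, vecs = vals[idx], vecs[:, idx]
        # Scale so each feature-space eigenvector has unit norm
        alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
        return Kc @ alphas  # projections of the training data

    # Toy example: two noisy concentric circles, which linear PCA cannot separate
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 200)
    r = np.repeat([1.0, 3.0], 100)
    X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.1 * rng.standard_normal((200, 2))
    Z = kpca(X, n_components=2, gamma=0.5)
    print(Z.shape)  # (200, 2)

Because the feature space never appears explicitly (only the kernel matrix K does), operations such as down-weighting outlier samples or imputing missing entries cannot be applied directly to the feature-space data, which is the difficulty the paper's inference framework addresses.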

Notes
Number of pages: 8

Text Reference
Minh Hoai Nguyen and Fernando De la Torre Frade, "Robust Kernel Principal Component Analysis," Advances in Neural Information Processing Systems, December 2008.

BibTeX Reference
@inproceedings{Nguyen_2008_6287,
   author = "Minh Hoai Nguyen and Fernando {De la Torre Frade}",
   title = "Robust Kernel Principal Component Analysis",
   booktitle = "Advances in Neural Information Processing Systems",
   month = "December",
   year = "2008",
}