On the Chance Accuracies of Large Collections of Classifiers

Mark Palatucci and Andrew Carlson
Proceedings of the 25th International Conference on Machine Learning, July 2008.


Download
  • PDF (161 KB)

Abstract
We provide a theoretical analysis of the chance accuracies of large collections of classifiers. We show that on problems with small numbers of examples, some classifier in a large collection is likely to perform well purely by random chance, and we derive a theorem to calculate this chance accuracy explicitly.
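As a rough illustration of the result (a sketch under assumed details, not the paper's own derivation), the expected best chance accuracy follows from standard order statistics: if m classifiers each guess independently on n balanced binary examples, each classifier's correct-guess count is Binomial(n, 1/2), and the maximum of m such counts has a CDF equal to the m-th power of the binomial CDF. The Python sketch below computes the expected maximum chance accuracy under these assumptions; it uses SciPy, and the function name is illustrative rather than taken from the paper.

# Sketch: expected maximum chance accuracy among m independent classifiers,
# each guessing on n balanced binary examples. Illustrative assumptions,
# not the paper's exact theorem or notation.
from scipy.stats import binom

def expected_max_chance_accuracy(m, n, p=0.5):
    # Correct-count CDF for one guessing classifier: F(k) = P(X <= k),
    # with X ~ Binomial(n, p). The maximum of m i.i.d. counts has CDF
    # F(k)^m, so E[max] = sum_{k=0}^{n-1} (1 - F(k)^m); dividing by n
    # converts the expected count into an accuracy.
    expected_max_count = sum(1.0 - binom.cdf(k, n, p) ** m for k in range(n))
    return expected_max_count / n

# Among 10,000 guessing classifiers evaluated on 30 examples, the best
# one can be expected to score roughly 84% accuracy by chance alone.
print(expected_max_chance_accuracy(m=10_000, n=30))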

We use this theorem to provide a principled feature selection criterion for sparse, high-dimensional problems. We evaluate this method on microarray and fMRI datasets and show that its accuracy comes very close to the optimal accuracy obtained from an oracle. We also show that on the fMRI dataset this technique successfully chooses relevant features, while another state-of-the-art method, the False Discovery Rate (FDR), fails completely at standard significance levels.
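To make the feature selection idea concrete, one simple instantiation (again a sketch under assumed details, not the paper's exact criterion) treats each candidate feature as a single-feature classifier, estimates its accuracy on the available examples, and keeps only the features whose accuracy exceeds the expected maximum chance accuracy from the sketch above:

# Sketch of a chance-accuracy feature selection rule, reusing the
# expected_max_chance_accuracy function defined above. Illustrative
# details only; the paper derives its own criterion.
import numpy as np

def select_features(feature_accuracies, n_examples):
    # feature_accuracies: one estimated accuracy per candidate feature,
    # e.g. from a single-feature classifier scored on n_examples examples.
    accs = np.asarray(feature_accuracies)
    threshold = expected_max_chance_accuracy(m=len(accs), n=n_examples)
    # Keep features that beat what the single best of len(accs) purely
    # guessing classifiers would be expected to achieve by chance.
    return np.flatnonzero(accs > threshold)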


Keywords
order statistics, extreme value, feature selection, multiple hypothesis testing

Text Reference
Mark Palatucci and Andrew Carlson, "On the Chance Accuracies of Large Collections of Classifiers," Proceedings of the 25th International Conference on Machine Learning, July 2008.

BibTeX Reference
@inproceedings{Palatucci_2008_6049,
   author = "Mark Palatucci and Andrew Carlson",
   title = "On the Chance Accuracies of Large Collections of Classifiers",
   booktitle = "Proceedings of the 25th International Conference on Machine Learning",
   month = "July",
   year = "2008",
}