Maximum Entropy for Collaborative Filtering

Charles Zitnick and Takeo Kanade
Proc. Twentieth Conference on Uncertainty in Artificial Intelligence, 2004, pp. 636-643.


Abstract
Within the task of collaborative filtering, two challenges exist for computing conditional probabilities. First, the available training data is typically sparse relative to the size of the domain, so support for higher-order interactions is generally not present. Second, the variables being conditioned upon vary for each query; that is, users label different variables during each query, so there is no consistent input-to-output mapping. To address these problems we propose a maximum entropy approach using a non-standard measure of entropy. The approach reduces to a set of linear equations that can be solved efficiently.
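The paper's own formulation is not reproduced on this page, but the abstract's key point, that a non-standard entropy measure can reduce constrained maximum-entropy estimation to a linear system, can be illustrated generically. The sketch below is only an assumption-laden illustration, not the authors' method: it uses a quadratic surrogate objective in place of Shannon entropy, and the names quadratic_maxent, F (feature/constraint matrix), and b (target moments) are all hypothetical.

import numpy as np

def quadratic_maxent(F, b):
    """Maximize -0.5 * ||p||^2 subject to F @ p = b and sum(p) = 1.

    With a quadratic objective, the stationarity condition
    p = F.T @ lam + mu * 1 turns the constraints into a linear system
    in the Lagrange multipliers (lam, mu), solved in closed form.
    (Illustrative only; the paper's non-standard entropy may differ.)
    """
    k, n = F.shape
    ones = np.ones(n)
    # Assemble the KKT system:
    # [ F F^T    F 1 ] [lam]   [b]
    # [ (F 1)^T   n  ] [mu ] = [1]
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = F @ F.T
    A[:k, k] = F @ ones
    A[k, :k] = F @ ones
    A[k, k] = n
    rhs = np.concatenate([b, [1.0]])
    lam_mu = np.linalg.solve(A, rhs)
    lam, mu = lam_mu[:k], lam_mu[k]
    # Recover the estimated distribution (nonnegativity is not enforced here).
    return F.T @ lam + mu * ones

if __name__ == "__main__":
    # Toy example: a distribution over 5 rating values constrained to have
    # an expected rating of 3.8; the multipliers come from one linear solve.
    F = np.array([[1.0, 2.0, 3.0, 4.0, 5.0]])
    b = np.array([3.8])
    p = quadratic_maxent(F, b)
    print(p, p.sum(), F @ p)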

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Number of pages: 8

Text Reference
Charles Zitnick and Takeo Kanade, "Maximum Entropy for Collaborative Filtering," Proc. Twentieth Conference on Uncertainty in Artificial Intelligence, 2004, pp. 636-643.

BibTeX Reference
@inproceedings{Zitnick_2004_4872,
   author = "Charles Zitnick and Takeo Kanade",
   title = "Maximum Entropy for Collaborative Filtering",
   booktitle = "Proc. Twentieth Conference on Uncertainty in Artificial Intelligence",
   pages = "636--643",
   year = "2004",
}