Aesthetic Image Classification for Autonomous Agents

Mark Desnoyer and David Wettergreen
August 2010, pp. 3452-3455.



Abstract
Computational aesthetics is the study of applying machine learning techniques to identify aesthetically pleasing imagery. Prior work used online datasets scraped from large user communities like Flickr to obtain labeled data. However, online imagery represents results late in the media generation process: the photographer has already framed the shot and then picked the best results to upload. Thus, this technique can only identify quality imagery after it has been taken. In contrast, automatically creating pleasing imagery requires understanding the imagery present earlier in the process. This paper applies computational aesthetics techniques to a novel dataset from earlier in that process in order to understand how the problem changes when an autonomous agent, such as a robot or a real-time camera aid, creates pleasing imagery instead of simply identifying it.

Keywords
aesthetic, classification, computational aesthetics, vision

Notes
Associated Center(s) / Consortia: Field Robotics Center

Text Reference
Mark Desnoyer and David Wettergreen, "Aesthetic Image Classification for Autonomous Agents," August 2010, pp. 3452-3455.

BibTeX Reference
@inproceedings{Desnoyer_2010_6807,
   author = "Mark Desnoyer and David Wettergreen",
   title = "Aesthetic Image Classification for Autonomous Agents",
   booktitle = "",
   pages = "3452 - 3455",
   month = "August",
   year = "2010",
}