Time-Mapping Using Space-Time Saliency

Feng Zhou, Sing Bing Kang, and Michael F. Cohen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July, 2014.


Download
  • Adobe portable document format (pdf) (7MB)
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract
We describe a new approach for generating regular-speed, low-frame-rate (LFR) video from a high-frame-rate (HFR) input while preserving the important moments in the original. We call this time-mapping, a time-based analogy to spatial tone-mapping from high dynamic range to low dynamic range. Our approach makes three contributions: (1) a robust space-time saliency method for evaluating visual importance, (2) a re-timing technique that temporally resamples frames based on their importance, and (3) temporal filters to enhance the rendering of salient motion. Results of our space-time saliency method on a benchmark dataset show it is state-of-the-art. In addition, a user study demonstrates the benefits of our HFR-to-LFR time-mapping over more direct methods.
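To illustrate the re-timing idea described above, here is a minimal sketch of saliency-driven temporal resampling. It is not the paper's formulation: the specific scheme of inverting the cumulative importance curve, and the function name retime_by_importance, are assumptions made for illustration; the paper only states that frames are resampled according to their importance.

```python
import numpy as np

def retime_by_importance(saliency, num_output_frames):
    """Map per-frame saliency of an HFR input to LFR output frame indices.

    Illustrative sketch only: output samples are placed at evenly spaced
    levels of the cumulative importance curve, so salient spans receive
    proportionally more output frames (they play back more slowly).
    """
    s = np.asarray(saliency, dtype=float)
    s = s / s.sum()                      # normalize importance to a distribution
    cdf = np.cumsum(s)                   # cumulative importance over input time
    levels = np.linspace(0.0, 1.0, num_output_frames)
    indices = np.searchsorted(cdf, levels, side="left")
    return np.clip(indices, 0, len(s) - 1)

# Example: a 240-frame HFR clip re-timed to 30 output frames, with a
# synthetic saliency peak in the middle of the clip.
saliency = np.exp(-0.5 * ((np.arange(240) - 120) / 20.0) ** 2) + 0.05
print(retime_by_importance(saliency, 30))
```

In this sketch, frames near the saliency peak are sampled densely while the low-importance head and tail of the clip are skipped over quickly, which is the qualitative behavior the abstract describes.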

Text Reference
Feng Zhou, Sing Bing Kang, and Michael F. Cohen, "Time-Mapping Using Space-Time Saliency," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July, 2014.

BibTeX Reference
@inproceedings{Zhou_2014_7569,
   author = "Feng Zhou and Sing Bing Kang and Michael F. Cohen",
   title = "Time-Mapping Using Space-Time Saliency",
   booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
   month = "July",
   year = "2014",
}