Learning to detect occluded objects in videos

Satyaki Chakraborty
Master's Thesis, Tech. Report CMU-RI-TR-19-33, June 2019


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Occlusion is one of the most significant challenges faced by object detectors and trackers. While object detection in videos has received considerable attention in recent years, most existing methods in this domain do not target detecting objects when they are occluded. Yet being able to detect or track an object of interest through occlusion has been a long-standing challenge for a variety of autonomous tasks. Traditional methods that employ visual object trackers with explicit occlusion modeling suffer from drift and/or make strong assumptions about the data. We propose an end-to-end method, built upon the success of region-based video object detectors, that learns to model occlusion in a data-driven way. Finally, we show that our method achieves superior performance relative to state-of-the-art video object detectors on a dataset of furniture assembly videos collected from the internet, where small objects like screws, nuts, and bolts often become occluded from the camera viewpoint.
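The abstract does not spell out how occlusion is quantified; as a loose illustration only (not the thesis's method), a common geometric proxy for occlusion between two detections is the fraction of one bounding box covered by another. A minimal sketch in Python, with hypothetical function names:

```python
def intersection_area(a, b):
    """Area of overlap between two boxes given as [x1, y1, x2, y2]."""
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def occlusion_ratio(box, occluder):
    """Fraction of `box` covered by `occluder`:
    0.0 = fully visible, 1.0 = fully covered."""
    area = (box[2] - box[0]) * (box[3] - box[1])
    if area <= 0:
        return 0.0
    return intersection_area(box, occluder) / area

# Hypothetical example: a screw box half-covered by a hand box.
screw = [0.0, 0.0, 10.0, 10.0]
hand = [5.0, 0.0, 20.0, 10.0]
print(occlusion_ratio(screw, hand))  # 0.5
```

Hand-crafted cues like this are exactly what a data-driven, end-to-end detector aims to replace with learned representations.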


@mastersthesis{Chakraborty-2019-116211,
author = {Satyaki Chakraborty},
title = {Learning to detect occluded objects in videos},
year = {2019},
month = {June},
school = {},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-19-33},
keywords = {object detection, occlusion},
}