Quadtree Generating Networks: Efficient Hierarchical Scene Parsing with Sparse Convolutions

Kashyap Chitta, Jose M. Alvarez, and Martial Hebert
Conference Paper, Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '20), pp. 2009–2018, March 2020

Abstract

Semantic segmentation with Convolutional Neural Networks is a memory-intensive task due to the high spatial resolution of feature maps and output predictions. In this paper, we present Quadtree Generating Networks (QGNs), a novel approach that drastically reduces the memory footprint of modern semantic segmentation networks. The key idea is to use quadtrees to represent the predictions and target segmentation masks instead of dense pixel grids. Our quadtree representation enables hierarchical processing of an input image, with the most computationally demanding layers only being applied at regions of the image containing boundaries between classes. In addition, given a trained model, our representation enables flexible inference schemes that trade off accuracy and computational cost, allowing the network to adapt in constrained settings such as embedded devices. We demonstrate the benefits of our approach on the Cityscapes, SUN-RGBD and ADE20k datasets. On Cityscapes, we obtain a relative 3% mIoU improvement compared to a dilated network with similar memory consumption, and incur only a 3% relative mIoU drop compared to a large dilated network, while reducing memory consumption by over 4×.
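The core intuition behind the quadtree representation can be illustrated with a minimal sketch: a square label mask is stored as a single leaf wherever the region is uniform, and recursively subdivided into four quadrants only where class boundaries occur. The helper names below (`build_quadtree`, `count_leaves`) are hypothetical and for illustration only; they are not the paper's QGN implementation, which predicts quadtree cells with sparse convolutions rather than building the tree from a dense mask.

```python
import numpy as np

def build_quadtree(mask, x=0, y=0, size=None):
    """Recursively encode a square label mask as a quadtree.

    Returns ('leaf', label) for a uniform region, or
    ('node', [children]) with four children ordered NW, NE, SW, SE.
    Hypothetical illustration; not the paper's QGN code.
    """
    if size is None:
        size = mask.shape[0]  # assumes a square, power-of-two mask
    region = mask[y:y + size, x:x + size]
    first = region.flat[0]
    if np.all(region == first):
        return ('leaf', int(first))  # uniform region: one leaf, no split
    half = size // 2
    return ('node', [
        build_quadtree(mask, x, y, half),                # NW quadrant
        build_quadtree(mask, x + half, y, half),         # NE quadrant
        build_quadtree(mask, x, y + half, half),         # SW quadrant
        build_quadtree(mask, x + half, y + half, half),  # SE quadrant
    ])

def count_leaves(tree):
    """Number of leaf cells needed to represent the mask."""
    kind, payload = tree
    if kind == 'leaf':
        return 1
    return sum(count_leaves(child) for child in payload)
```

For an 8×8 mask split by a single vertical class boundary down the middle, this encoding needs only 4 leaves instead of 64 pixels; the savings grow with resolution, since subdivision is confined to cells that straddle a boundary, which is what lets the expensive layers run only near class boundaries.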

BibTeX

@conference{Chitta-2020-117784,
author = {Kashyap Chitta and Jose M. Alvarez and Martial Hebert},
title = {Quadtree Generating Networks: Efficient Hierarchical Scene Parsing with Sparse Convolutions},
booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '20)},
year = {2020},
month = {March},
pages = {2009--2018},
}