Physics-Based Rendering for Improving Robustness to Rain - Robotics Institute Carnegie Mellon University

Physics-Based Rendering for Improving Robustness to Rain

Shirsendu Sukanta Halder, Jean-Francois Lalonde, and Raoul de Charette
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), pp. 10202-10211, October 2019

Abstract

To improve robustness to rain, we present a physically-based rain rendering pipeline for realistically inserting rain into clear-weather images. Our rendering relies on a physical particle simulator, an estimation of the scene lighting, and an accurate photometric model of rain to augment images with arbitrary amounts of realistic rain or fog. We validate our rendering with a user study, in which our rain was judged 40% more realistic than the state of the art. Using our weather-augmented KITTI and Cityscapes datasets, we conduct a thorough evaluation of deep object detection and semantic segmentation algorithms and show that their performance decreases in degraded weather, on the order of 15% for object detection and 60% for semantic segmentation. Furthermore, we show that refining existing networks with our augmented images improves the robustness of both object detection and semantic segmentation algorithms. We experiment on nuScenes and measure an improvement of 15% for object detection and 35% for semantic segmentation compared to the original rainy performance. Augmented databases and code are available on the project page.
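To give a concrete feel for the compositing step the abstract describes, here is a minimal toy sketch in NumPy that alpha-blends bright diagonal streaks over a clear-weather image. It is not the paper's pipeline: the streak geometry, the `intensity` blend weight, and the uniform streak brightness are all simplifying assumptions standing in for the physical particle simulation and lighting-dependent photometric model used in the actual work.

```python
import numpy as np

def add_rain_streaks(image, num_streaks=200, length=9, intensity=0.8, seed=0):
    """Composite simple slanted rain streaks onto an RGB uint8 image.

    Toy illustration only: streak positions are drawn uniformly at
    random, and each streak is a short bright line alpha-blended over
    the image (a crude stand-in for physically simulated raindrops).
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape

    # Binary layer marking which pixels a streak passes through.
    streak_layer = np.zeros((h, w), dtype=np.float32)
    xs = rng.integers(0, w, num_streaks)
    ys = rng.integers(0, h, num_streaks)
    for x0, y0 in zip(xs, ys):
        for t in range(length):
            # One pixel down per step, with a slight horizontal drift.
            y, x = y0 + t, x0 + t // 3
            if y < h and x < w:
                streak_layer[y, x] = 1.0

    # Alpha-blend white streaks over the image, broadcasting over RGB.
    alpha = intensity * streak_layer[..., None]
    rained = (1.0 - alpha) * image.astype(np.float32) + alpha * 255.0
    return rained.astype(np.uint8)
```

A real photometric model would instead modulate each drop's appearance by the estimated scene lighting and camera exposure; this sketch only shows where such a compositing step sits in an augmentation pipeline.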

BibTeX

@conference{Halder-2019-126862,
author = {Shirsendu Sukanta Halder and Jean-Francois Lalonde and Raoul de Charette},
title = {Physics-Based Rendering for Improving Robustness to Rain},
booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
year = {2019},
month = {October},
pages = {10202 - 10211},
}