Seeing what a GAN Cannot Generate

David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, and Antonio Torralba
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), pp. 4501–4510, October 2019

Abstract

Despite the success of Generative Adversarial Networks (GANs), mode collapse remains a serious issue during GAN training. To date, little work has focused on understanding and quantifying which modes have been dropped by a model. In this work, we visualize mode collapse at both the distribution level and the instance level. First, we deploy a semantic segmentation network to compare the distribution of segmented objects in the generated images with the target distribution in the training set. Differences in statistics reveal object classes that are omitted by a GAN. Second, given the identified omitted object classes, we visualize the GAN's omissions directly. In particular, we compare specific differences between individual photos and their approximate inversions by a GAN. To this end, we relax the problem of inversion and solve the tractable problem of inverting a GAN layer instead of the entire generator. Finally, we use this framework to analyze several recent GANs trained on multiple datasets and identify their typical failure cases.
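The two analyses described above can be illustrated with short sketches. First, the distribution-level comparison: run a semantic segmentation network over training photos and over GAN samples, tally per-class pixel statistics, and look for classes the generator under-produces. In this minimal sketch the random label maps merely stand in for segmenter outputs, and the class count and reporting format are illustrative assumptions, not the paper's implementation.

    # Sketch of the distribution-level statistic: per-class pixel fractions
    # for real vs. generated images, and the classes with the largest deficit.
    import numpy as np

    def class_pixel_fractions(label_maps, num_classes):
        """Mean fraction of pixels assigned to each class over a set of label maps."""
        counts = np.zeros(num_classes)
        total = 0
        for seg in label_maps:
            counts += np.bincount(seg.ravel(), minlength=num_classes)
            total += seg.size
        return counts / total

    # Placeholder label maps standing in for segmentation outputs; a real run
    # would segment training photos and GAN samples with the same network.
    num_classes = 5
    rng = np.random.default_rng(0)
    real_segs = [rng.integers(0, num_classes, size=(64, 64)) for _ in range(100)]
    fake_segs = [rng.integers(0, num_classes - 1, size=(64, 64)) for _ in range(100)]

    real_stats = class_pixel_fractions(real_segs, num_classes)
    fake_stats = class_pixel_fractions(fake_segs, num_classes)

    # Classes whose generated frequency falls far below the training frequency
    # are candidates for objects the GAN has dropped (largest deficit first).
    for c in np.argsort(fake_stats - real_stats):
        print(f"class {c}: real {real_stats[c]:.3f}, generated {fake_stats[c]:.3f}")

Second, the instance-level comparison rests on layer-wise inversion: rather than searching for a latent code that the full generator maps onto the photo, optimize an intermediate representation r so that the later layers alone reconstruct it. The sketch below uses an untrained toy generator split into early and late stages, a random-latent initialization, and a plain pixel reconstruction loss; all of these are stand-ins (the paper inverts a pretrained generator and initializes from a learned encoder).

    # Sketch of layer-wise inversion: optimize an intermediate activation r so
    # that the late layers G_late(r) reconstruct a target image x.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-in generator, split into early layers g_early and late layers G_late.
    g_early = nn.Sequential(nn.Linear(64, 8 * 4 * 4), nn.ReLU())
    G_late = nn.Sequential(
        nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
    )

    def invert_layer(x_target, steps=200, lr=0.05):
        """Find r such that G_late(r) approximates x_target."""
        # Initialize r by pushing a random latent through the early layers.
        z = torch.randn(x_target.size(0), 64)
        r = g_early(z).view(-1, 8, 4, 4).detach().requires_grad_(True)
        opt = torch.optim.Adam([r], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.mse_loss(G_late(r), x_target)  # image reconstruction loss
            loss.backward()
            opt.step()
        return r.detach(), G_late(r).detach()

    # The residual between a photo and its best reconstruction highlights
    # content the generator cannot reproduce.
    x = torch.rand(2, 3, 16, 16) * 2 - 1
    r, x_hat = invert_layer(x)
    print((x - x_hat).abs().mean().item())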

BibTeX

@conference{Bau-2019-125675,
author = {David Bau and Jun-Yan Zhu and Jonas Wulff and William Peebles and Hendrik Strobelt and Bolei Zhou and Antonio Torralba},
title = {Seeing what a GAN Cannot Generate},
booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
year = {2019},
month = {October},
pages = {4501--4510},
}