Self-Challenging Improves Cross-Domain Generalization - Robotics Institute Carnegie Mellon University

Zeyi Huang, Haohan Wang, Eric P. Xing, and Dong Huang
Conference Paper, Proceedings of (ECCV) European Conference on Computer Vision, August, 2020

Abstract

Convolutional Neural Networks (CNNs) classify images by activating the dominant features that correlate with the labels. When the training and testing data follow similar distributions, their dominant features are also similar, leading to decent test performance. Performance degrades, however, when the testing distribution differs from the training one, which is the central challenge of cross-domain image classification. We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of a CNN to out-of-domain data. RSC iteratively challenges (discards) the dominant features activated on the training data and forces the network to activate the remaining features that correlate with the labels. This process appears to activate feature representations applicable to out-of-domain data without prior knowledge of the new domain and without learning extra network parameters. We present the theoretical properties and conditions under which RSC improves cross-domain generalization. The experiments endorse the simple, effective, and architecture-agnostic nature of our RSC method.
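The self-challenging step described above can be sketched in a few lines: rank feature dimensions by how strongly they drive the true-class score (here approximated by the gradient of that score with respect to the features), zero out the top fraction, and let the network rely on what remains. This is a minimal numpy toy with a linear classification head, not the paper's implementation; the names `rsc_mask` and `drop_ratio` are illustrative assumptions.

```python
import numpy as np

def rsc_mask(features, grads, drop_ratio=0.33):
    """Representation Self-Challenging (sketch): mute the most
    'dominant' feature dimensions, ranked by the gradient of the
    true-class score w.r.t. the features, so the classifier must
    use the remaining features. `drop_ratio` is an assumed name
    for the fraction of dimensions to discard."""
    k = int(drop_ratio * features.shape[-1])         # number of dims to mute
    dominant = np.argsort(grads, axis=-1)[..., -k:]  # indices of top-k dims
    mask = np.ones_like(features)
    np.put_along_axis(mask, dominant, 0.0, axis=-1)  # zero the dominant dims
    return features * mask

# Toy example: a linear head W over an 8-dim feature vector.
rng = np.random.default_rng(0)
z = rng.random(8)        # feature representation from the backbone
W = rng.random((3, 8))   # 3-class linear head; assume true class is 0
g = W[0]                 # grad of the true-class score W[0] @ z w.r.t. z
z_challenged = rsc_mask(z, g, drop_ratio=0.25)  # 2 of 8 dims muted
```

During training, the loss would then be computed on the challenged features `z_challenged` instead of `z`, so the gradient update strengthens the non-dominant features.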

BibTeX

@conference{Huang-2020-123146,
author = {Zeyi Huang and Haohan Wang and Eric P. Xing and Dong Huang},
title = {Self-Challenging Improves Cross-Domain Generalization},
booktitle = {Proceedings of (ECCV) European Conference on Computer Vision},
year = {2020},
month = {August},
keywords = {cross-domain generalization, robustness},
}