Robotics Institute, Carnegie Mellon University

Efficient K-Shot Learning with Regularized Deep Networks

Donghyun Yoo, Haoqi Fan, Vishnu Naresh Boddeti, and Kris M. Kitani
Conference Paper, Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI '18), pp. 4382-4389, February 2018

Abstract

Feature representations from pre-trained deep neural networks are known to exhibit excellent generalization and utility across a variety of related tasks. Fine-tuning is by far the simplest and most widely used approach to exploit and adapt these feature representations to novel tasks with limited data. Despite its effectiveness, fine-tuning is often sub-optimal and requires very careful optimization to prevent severe over-fitting to small datasets. This sub-optimality and over-fitting is due in part to the large number of parameters (≈ 10^6) used in a typical deep convolutional neural network. To address these problems, we propose a simple yet effective regularization method for fine-tuning pre-trained deep networks for the task of k-shot learning. To prevent over-fitting, our key strategy is to cluster the model parameters while ensuring intra-cluster similarity and inter-cluster diversity of the parameters, effectively regularizing the dimensionality of the parameter search space. In particular, we identify groups of neurons within each layer of a deep network that share similar activation patterns. When the network is to be fine-tuned for a classification task using only k examples, we propagate a single gradient to all of the neuron parameters that belong to the same group. Grouping neurons is non-trivial because neuron activations depend on the distribution of the input data. To efficiently search for optimal groupings conditioned on the input data, we propose a reinforcement learning search strategy that uses recurrent networks to learn the optimal group assignments for each network layer. Experimental results show that our method can be easily applied to several popular convolutional neural networks and improves upon other state-of-the-art fine-tuning-based k-shot learning strategies by more than 10% in accuracy.
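The group-wise gradient sharing described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's exact update rule: `group_gradients` and the choice of averaging within each group are assumptions made for clarity, with the group assignments given rather than learned by the paper's RL search.

```python
import numpy as np

def group_gradients(grad, groups):
    """Replace each neuron's gradient row with its group's mean gradient,
    so every neuron in a group receives a single shared update
    (an illustrative stand-in for the paper's grouped propagation)."""
    shared = np.empty_like(grad)
    for g in np.unique(groups):
        mask = groups == g
        # All neurons in group g get the same (averaged) gradient.
        shared[mask] = grad[mask].mean(axis=0)
    return shared

# Toy layer: 4 neurons x 3 inputs; neurons {0,1} form group 0, {2,3} group 1.
grad = np.array([[1., 2., 3.],
                 [3., 2., 1.],
                 [0., 0., 0.],
                 [4., 4., 4.]])
groups = np.array([0, 0, 1, 1])
shared = group_gradients(grad, groups)
```

Because each group contributes one effective gradient direction, the number of free update directions drops from the number of neurons to the number of groups, which is the dimensionality regularization the abstract refers to.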

BibTeX

@conference{Yoo-2018-109789,
author = {Donghyun Yoo and Haoqi Fan and Vishnu Naresh Boddeti and Kris M. Kitani},
title = {Efficient K-Shot Learning with Regularized Deep Networks},
booktitle = {Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI '18)},
year = {2018},
month = {February},
pages = {4382--4389},
}