Image Captioning with Compositional Neural Module Networks

Conference Paper, Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI '19), pp. 3576-3584, August, 2019

Abstract

In image captioning, where fluency is an important factor in evaluation, e.g., in n-gram metrics, sequential models are commonly used; however, sequential models generally produce overgeneralized expressions that lack the details that may be present in an input image. Inspired by the idea of compositional neural module networks in visual question answering, we introduce a hierarchical framework for image captioning that explores both the compositionality and the sequentiality of natural language. Our algorithm learns to compose a detail-rich sentence by selectively attending to different modules corresponding to unique aspects of each object detected in an input image, so as to include specific descriptions such as counts and colors. In a set of experiments on the MSCOCO dataset, the proposed model outperforms a state-of-the-art model across multiple evaluation metrics and, more importantly, presents visually interpretable results. Furthermore, the breakdown of subcategory F-scores of the SPICE metric and a human evaluation on Amazon Mechanical Turk show that our compositional module networks effectively generate accurate and detailed captions.
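
The core mechanism the abstract describes, selectively attending to aspect-specific modules for each detected object, can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the module inventory (count, color, size, relation), the dimensions, and the gating network are assumptions; it only shows soft attention over per-aspect module outputs, conditioned on the decoder's hidden state.

import torch
import torch.nn as nn

class ModuleAttention(nn.Module):
    """Hypothetical sketch: soft attention over aspect-specific modules.

    Each module maps a detected object's visual feature to an
    aspect-specific embedding (e.g., count, color); a gating network
    weights the modules conditioned on the decoder state at each step.
    """
    def __init__(self, feat_dim, hid_dim, num_modules=4):
        super().__init__()
        # One small MLP per assumed aspect module (count, color, size, relation).
        # Named modules_ to avoid shadowing nn.Module.modules().
        self.modules_ = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, hid_dim), nn.ReLU())
            for _ in range(num_modules)
        )
        # Gate: decoder hidden state -> attention logits over modules.
        self.gate = nn.Linear(hid_dim, num_modules)

    def forward(self, obj_feat, dec_state):
        # obj_feat: (B, feat_dim) feature of one detected object
        # dec_state: (B, hid_dim) current decoder hidden state
        outs = torch.stack([m(obj_feat) for m in self.modules_], dim=1)  # (B, M, hid)
        weights = torch.softmax(self.gate(dec_state), dim=-1)            # (B, M)
        # Weighted sum of module outputs; weights are interpretable per step.
        return (weights.unsqueeze(-1) * outs).sum(dim=1)                 # (B, hid)

Under this reading, the gate re-weights the modules at every decoding step, so the caption can draw on a count-like module when emitting "two" and a color-like module when emitting "red"; the attention weights then double as the kind of visually interpretable trace the paper reports.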

BibTeX

@conference{Tian-2019-117060,
author = {Junjiao Tian and Jean Oh},
title = {Image Captioning with Compositional Neural Module Networks},
booktitle = {Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI '19)},
year = {2019},
month = {August},
pages = {3576--3584},
keywords = {image captioning, neural module networks, compositionality, sequentiality},
}