
Robust Multi-View Representation Learning

Conference Paper, Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI '20) (Student Abstract Track), pp. 13939-13940, February 2020

Abstract

Multi-view data has become ubiquitous, especially with multi-sensor systems like self-driving cars or medical patient-side monitors. We propose two methods to approach robust multi-view representation learning with the aim of leveraging local relationships between views. The first is an extension of Canonical Correlation Analysis (CCA) where we consider multiple one-vs-rest CCA problems, one for each view. We use a group-sparsity penalty to encourage finding local relationships. The second method is a straightforward extension of a multi-view AutoEncoder with view-level dropout. We demonstrate the effectiveness of these methods in simple synthetic experiments. We also describe heuristics and extensions to improve and expand on these methods.
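To make the second idea concrete, the sketch below illustrates view-level dropout in a multi-view autoencoder. This is not the authors' implementation; the architecture (per-view encoders and decoders, mean-fused latent code), the dropout rate, and all layer sizes are assumptions chosen only for illustration.

```python
# A minimal sketch (not the paper's code) of a multi-view autoencoder with
# view-level dropout: during training, an entire view is occasionally zeroed
# out so the shared representation remains useful when a view is missing.
import torch
import torch.nn as nn

class MultiViewAutoEncoder(nn.Module):
    def __init__(self, view_dims, latent_dim=16, hidden_dim=64, view_dropout=0.3):
        super().__init__()
        self.view_dropout = view_dropout  # assumed rate, not from the paper
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, latent_dim))
            for d in view_dims
        )
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, d))
            for d in view_dims
        )

    def forward(self, views):
        # views: list of tensors, one per view, each of shape (batch, view_dim)
        if self.training:
            # View-level dropout: zero out a whole view with some probability.
            mask = (torch.rand(len(views)) > self.view_dropout).float()
            views = [v * m for v, m in zip(views, mask)]
        # Fuse per-view codes into one shared representation (mean fusion here;
        # concatenation or attention are equally plausible choices).
        codes = [enc(v) for enc, v in zip(self.encoders, views)]
        z = torch.stack(codes, dim=0).mean(dim=0)
        # Reconstruct every view from the shared code.
        return [dec(z) for dec in self.decoders], z

# Usage on synthetic data: two views of a 32-sample batch.
model = MultiViewAutoEncoder(view_dims=[10, 20])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1, x2 = torch.randn(32, 10), torch.randn(32, 20)
recons, z = model([x1, x2])
loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recons, [x1, x2]))
opt.zero_grad(); loss.backward(); opt.step()
```

The fusion step and reconstruction loss are deliberately simple; the point is only that dropping whole views during training forces the shared code to tolerate missing sensors at test time.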

BibTeX

@conference{Venkatesan-2020-121786,
author = {Sibi Venkatesan and James K. Miller and Artur Dubrawski},
title = {Robust Multi-View Representation Learning},
booktitle = {Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI '20) (Student Abstract Track)},
year = {2020},
month = {February},
pages = {13939--13940},
}