
GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking With 2D-3D Multi-Feature Learning

Conference Paper, Proceedings of (CVPR) Computer Vision and Pattern Recognition, pp. 6499 - 6508, June, 2020

Abstract

3D multi-object tracking (MOT) is crucial to autonomous systems. Recent work follows a standard tracking-by-detection pipeline, where feature extraction is first performed independently for each object in order to compute an affinity matrix. The affinity matrix is then passed to the Hungarian algorithm for data association. A key step in this standard pipeline is to learn discriminative features for different objects in order to reduce confusion during data association. In this work, we propose two techniques to improve discriminative feature learning for MOT: (1) instead of obtaining features for each object independently, we propose a novel feature interaction mechanism by introducing a Graph Neural Network. As a result, the feature of one object is informed of the features of other objects, so that it can lean towards objects with similar features (i.e., objects that probably share the same ID) and deviate from objects with dissimilar features (i.e., objects that probably have different IDs), leading to a more discriminative feature for each object; (2) instead of obtaining the feature from either 2D or 3D space as in prior work, we propose a novel joint feature extractor to learn appearance and motion features from 2D and 3D space simultaneously. As features from different modalities often carry complementary information, the joint feature can be more discriminative than the feature from any individual modality. To ensure that the joint feature extractor does not rely heavily on one modality, we also propose an ensemble training paradigm. Through extensive evaluation, our proposed method achieves state-of-the-art performance on the KITTI and nuScenes 3D MOT benchmarks. Our code will be made available at https://github.com/xinshuoweng/GNN3DMOT
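To make the two ideas in the abstract concrete, the sketch below shows one round of GNN-style feature interaction among object features, followed by an affinity matrix between detections and tracks that a Hungarian solver could consume. This is a minimal illustration under stated assumptions, not the authors' released implementation: the layer sizes, the similarity-weighted message passing, and the cosine affinity are all assumptions, and the repository linked above is the authoritative reference for the actual design.

# Minimal sketch (not the authors' implementation): one round of GNN-style
# feature interaction followed by an affinity matrix between detections and
# tracks. All layer sizes and the attention-style weighting are assumptions.
import torch
import torch.nn as nn


class FeatureInteraction(nn.Module):
    """Update each object's feature using the features of all other objects."""

    def __init__(self, dim=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, feats):                      # feats: (N, dim)
        n = feats.size(0)
        src = feats.unsqueeze(1).expand(n, n, -1)  # (N, N, dim) pairwise copies
        dst = feats.unsqueeze(0).expand(n, n, -1)
        edges = self.edge_mlp(torch.cat([src, dst], dim=-1))  # pairwise messages
        # Weight messages by feature similarity so an object's feature moves
        # toward similar objects and away from dissimilar ones.
        weights = torch.softmax((src * dst).sum(-1), dim=-1).unsqueeze(-1)
        agg = (weights * edges).sum(dim=1)         # aggregated message, (N, dim)
        return self.update(torch.cat([feats, agg], dim=-1))


def affinity(det_feats, trk_feats):
    """Cosine-similarity affinity matrix to feed into the Hungarian algorithm."""
    det = nn.functional.normalize(det_feats, dim=-1)
    trk = nn.functional.normalize(trk_feats, dim=-1)
    return det @ trk.t()                           # (N_det, N_trk)


# Usage: fuse per-object 2D/3D appearance and motion features beforehand
# (e.g., by concatenation into a 128-d joint feature), run the interaction
# over detections and tracks together, then compute the affinity matrix.
dets, trks = torch.randn(5, 128), torch.randn(4, 128)
layer = FeatureInteraction(128)
joint = layer(torch.cat([dets, trks], dim=0))
A = affinity(joint[:5], joint[5:])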

BibTeX

@conference{Weng-2020-122751,
author = {Xinshuo Weng and Yongxin Wang and Yunze Man and Kris Kitani},
title = {GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking With 2D-3D Multi-Feature Learning},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2020},
month = {June},
pages = {6499--6508},
}