Markerless Human Motion Transfer - Robotics Institute Carnegie Mellon University

Markerless Human Motion Transfer

Conference Paper, Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT '04), pp. 373-378, September 2004

Abstract

In this paper, we develop a computer vision-based system to transfer human motion from one subject to another. Our system uses a network of eight calibrated and synchronized cameras. We first build detailed kinematic models of the subjects based on our algorithms for extracting shape from silhouette across time. These models are then used to capture the motion (joint angles) of the subjects in new video sequences. Finally, we describe an image-based rendering algorithm to render the captured motion applied to the articulated model of another person. Our rendering algorithm uses an ensemble of spatially and temporally distributed images to generate photo-realistic video of the transferred motion. We demonstrate the performance of the system by rendering throwing and kung fu motions on subjects who did not perform them.
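The shape-from-silhouette step in the abstract can be illustrated with the classic visual-hull idea: a 3D point belongs to the reconstruction only if it projects inside the foreground silhouette in every calibrated camera. The sketch below is ours, not the paper's implementation; it assumes 3x4 projection matrices and binary silhouette masks, and ignores the temporal refinement the paper adds.

```python
import numpy as np

def shape_from_silhouette(voxels, cameras, silhouettes):
    """Visual-hull carving sketch: keep each candidate voxel center only if
    its projection lands on a foreground pixel in every view.

    voxels      : (N, 3) array of candidate voxel centers
    cameras     : list of 3x4 projection matrices (one per view)
    silhouettes : list of 2D binary masks (nonzero = foreground)
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.c_[voxels, np.ones(len(voxels))]  # (N, 4) homogeneous coords
    for P, sil in zip(cameras, silhouettes):
        h, w = sil.shape
        pts = P @ homog.T                          # project: (3, N)
        u = np.round(pts[0] / pts[2]).astype(int)  # pixel column
        v = np.round(pts[1] / pts[2]).astype(int)  # pixel row
        in_bounds = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[in_bounds] = sil[v[in_bounds], u[in_bounds]] > 0
        keep &= hit  # a miss in any single view carves the voxel away
    return voxels[keep]
```

In practice the paper extends this single-frame carving across time and aligns the results to an articulated kinematic model; this sketch only shows the per-frame consistency test.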

BibTeX

@conference{Cheung-2004-9014,
author = {Kong Man Cheung and Simon Baker and Jessica K. Hodgins and Takeo Kanade},
title = {Markerless Human Motion Transfer},
booktitle = {Proceedings of 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT '04)},
year = {2004},
month = {September},
pages = {373--378},
}