Towards Photorealistic Dynamic Capture and Animation of Human Hair and Head

PhD Thesis, Tech. Report CMU-RI-TR-23-77, September 2023

Abstract

Realistic human avatars play a key role in immersive virtual telepresence. A human avatar needs to faithfully reflect human appearance to reach a high level of realism. Existing works have made significant progress in building drivable realistic face avatars, but they rarely include realistic dynamic hair despite its importance in human appearance. In pursuit of drivable, realistic human avatars with dynamic hair, we focus on the problem of automatically capturing and animating hair from multi-view videos.

We first look into the problem of capturing head motion with near-static hair. Because hair has complex geometry, we use a neural volumetric representation that can be rendered efficiently. By optimizing this representation through differentiable volumetric rendering, using gradients from a 2D reconstruction loss, we achieve photorealistic capture of complex hairstyles.
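
The core optimization loop can be illustrated with a minimal sketch of differentiable volumetric rendering over a voxel grid. This is not the thesis implementation; the function name, grid layout (a single RGB-plus-density grid spanning the unit cube), and sampling scheme are assumptions chosen for brevity.

import torch
import torch.nn.functional as F

def render_rays(grid, origins, dirs, near=0.8, far=1.6, n_samples=64):
    # grid: (1, 4, D, H, W) voxel grid of RGB + density, assumed to span [-1, 1]^3
    # origins, dirs: (R, 3) ray origins and directions
    t = torch.linspace(near, far, n_samples, device=grid.device)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]   # (R, S, 3)
    # trilinear lookup; grid_sample expects (x, y, z) coordinates in [-1, 1]
    coords = pts.view(1, -1, 1, 1, 3)
    feats = F.grid_sample(grid, coords, align_corners=True)           # (1, 4, R*S, 1, 1)
    feats = feats.view(4, pts.shape[0], n_samples).permute(1, 2, 0)   # (R, S, 4)
    rgb = torch.sigmoid(feats[..., :3])
    sigma = F.softplus(feats[..., 3])
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)                           # (R, S)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                           # compositing weights
    return (weights[..., None] * rgb).sum(dim=1)                      # (R, 3) pixel colors

Because every step above is differentiable, a 2D reconstruction loss such as ((render_rays(grid, origins, dirs) - gt_rgb) ** 2).mean() back-propagates directly into the volumetric representation, which is what drives the capture.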

We then extend the problem to capturing hair with dynamics. We attach volumetric primitives to tracked hair strands to learn fine-level appearance and geometry via differentiable rendering. We further design a differentiable volumetric rendering algorithm that incorporates optical flow to ensure temporal smoothness at a fine level. As a result, we achieve robust dynamic capture of hair under large motions.
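
One way to couple the renderer with optical flow is to composite the motion of the tracked primitives into an expected 2D flow per ray and supervise it with a precomputed flow map. The sketch below assumes this formulation; the function name, pinhole projection, and L1 penalty are illustrative choices, not the thesis algorithm.

import torch

def rendered_flow_loss(weights, pts_t, pts_t1, K, flow_gt):
    # weights: (R, S) compositing weights from the volumetric renderer
    # pts_t, pts_t1: (R, S, 3) sample points at frame t and their positions
    #   at frame t+1, carried along by the moving volumetric primitives
    # K: (3, 3) camera intrinsics; flow_gt: (R, 2) precomputed optical flow
    def project(p):                                   # pinhole projection to pixels
        uv = (K @ p.reshape(-1, 3).T).T
        return (uv[:, :2] / uv[:, 2:3]).reshape(p.shape[0], p.shape[1], 2)
    flow_3d = project(pts_t1) - project(pts_t)        # (R, S, 2) per-sample 2D motion
    flow_2d = (weights[..., None] * flow_3d).sum(1)   # expected flow per ray
    return (flow_2d - flow_gt).abs().mean()

Adding this term to the photometric loss encourages the fine-level volumetric motion to agree with the observed image-space motion from frame to frame.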

We then address the problem of building a hair dynamics model for generating novel animations. We present a two-stage, data-driven pipeline: the first stage compresses per-frame hair states using an autoencoder-as-a-tracker strategy, and the second stage learns a dynamics model in a supervised manner from the hair state data produced by the first stage. The learned model enables in-the-wild hair animation by performing hair state transitions conditioned on head motion and the head-relative gravity direction.
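
The two stages can be sketched as a pair of small networks. Everything here is a hypothetical minimal form: the state parameterization (a flattened per-frame hair state), latent size, and MLP architectures are assumptions, not the thesis design.

import torch
import torch.nn as nn

LATENT = 256

class HairStateAE(nn.Module):
    # Stage 1: compress a per-frame hair state into a low-dimensional latent;
    # encoding every captured frame yields a temporally consistent state track.
    def __init__(self, state_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(),
                                 nn.Linear(1024, state_dim))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

class HairDynamics(nn.Module):
    # Stage 2: predict the next hair latent from the current latent,
    # conditioned on head motion and the head-relative gravity direction.
    def __init__(self, motion_dim=6, gravity_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + motion_dim + gravity_dim, 512), nn.ReLU(),
            nn.Linear(512, LATENT))
    def forward(self, z_t, head_motion, gravity):
        return self.net(torch.cat([z_t, head_motion, gravity], dim=-1))

At animation time, the dynamics model is rolled forward autoregressively, z_{t+1} = dyn(z_t, motion_t, gravity_t), and each latent is decoded back to a hair state, which is how novel head motions drive hair state transitions in the wild.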

In parallel to capturing and animating specific hairstyles, we explore how to efficiently capture diverse hair appearances. Hair plays a significant role in personal identity, and the efficient creation of personalized avatars with faithful hair is essential for individual users. To handle the large intra-class variance in hair appearance and geometry, we present a universal hair appearance model that exploits the similarity between different hairstyles within local regions. The model takes 3D-aligned features as input and learns a unified manifold of local hair appearance, adaptively generating appearance for hairstyles with diverse topologies.
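
The key idea, a decoder shared across hairstyles that operates only on local, 3D-aligned inputs, can be sketched as a small conditional MLP. The class name, feature dimension, and output parameterization below are illustrative assumptions.

import torch
import torch.nn as nn

class LocalHairAppearance(nn.Module):
    # A single decoder shared across all hairstyles: each query point carries
    # a 3D-aligned local feature, and the decoder maps (feature, point) to
    # color and density, so it only has to model *local* hair appearance
    # rather than any single hairstyle's global structure.
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 4))   # RGB + density
    def forward(self, local_feat, pts):
        out = self.mlp(torch.cat([local_feat, pts], dim=-1))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

Because the decoder never sees a hairstyle identity, only local features, it can generalize to unseen hairstyles and topologies as long as their local appearance lies on the learned manifold.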

BibTeX

@phdthesis{Wang-2023-138428,
author = {Ziyan Wang},
title = {Towards Photorealistic Dynamic Capture and Animation of Human Hair and Head},
year = {2023},
month = {September},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-23-77},
keywords = {Neural Rendering, Dynamic Capture, Human Modeling, Animation},
}