Active Vision: Autonomous Aerial Cinematography with Learned Artistic Decision-Making

PhD Thesis, Tech. Report CMU-RI-TR-21-16, Robotics Institute, Carnegie Mellon University, May 2021

Abstract

Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints, such as entertainment, sports, and security. Fundamentally, it is a tool with immense potential to improve human creativity, expressiveness, and the sharing of experiences. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple highly trained human operators to control a single vehicle safely. Our research focus is to build autonomous systems that can empower any individual with the full artistic capabilities of aerial cameras. We develop a system for active vision: one that does not merely passively process the incoming sensor feed, but instead actively reasons about the cinematographic quality of viewpoints and safely generates sequences of shots. The theory and systems developed in this work can impact video generation in both real-world and simulated environments, such as professional and amateur movie-making, video games, and virtual reality.

First, we formalize the theory behind the aerial filming problem by incorporating cinematography guidelines into robot motion planning. We describe the problem in terms of its principal cost functions, and develop an efficient trajectory optimization framework for executing arbitrary types of shots while avoiding both collisions with obstacles and occlusions of the filming target.
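As a minimal illustration of this style of cost-based trajectory optimization, the sketch below minimizes a weighted sum of smoothness, shot-quality, and obstacle-clearance costs over a discretized camera path. The weights, cost terms, and scenario are hypothetical stand-ins rather than the thesis' actual formulation, and it uses an off-the-shelf optimizer instead of the efficient solver developed in this work.

import numpy as np
from scipy.optimize import minimize

# Illustrative weights; the thesis balances analogous objectives,
# but these names and values are hypothetical.
W_SMOOTH, W_SHOT, W_OBS = 1.0, 1.0, 10.0

def total_cost(flat, desired, obstacle, radius):
    traj = flat.reshape(desired.shape)
    # Smoothness: penalize accelerations along the discretized path.
    acc = traj[2:] - 2.0 * traj[1:-1] + traj[:-2]
    c_smooth = np.sum(acc ** 2)
    # Shot quality: deviation from the desired artistic viewpoints.
    c_shot = np.sum((traj - desired) ** 2)
    # Safety: hinge penalty inside the obstacle's clearance radius.
    d = np.linalg.norm(traj - obstacle, axis=1)
    c_obs = np.sum(np.maximum(0.0, radius - d) ** 2)
    return W_SMOOTH * c_smooth + W_SHOT * c_shot + W_OBS * c_obs

# Desired shot: a straight pass that happens to graze an obstacle.
desired = np.stack([np.linspace(0.0, 10.0, 20),
                    np.zeros(20), np.full(20, 3.0)], axis=1)
obstacle = np.array([5.0, 0.0, 3.0])
res = minimize(total_cost, desired.ravel(),
               args=(desired, obstacle, 2.0), method="L-BFGS-B")
traj = res.x.reshape(desired.shape)  # optimized camera path

The optimizer bends the path around the obstacle only where the clearance penalty is active, while the smoothness term keeps the resulting camera motion gradual.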

Second, we propose and develop a system for aerial cinematography in the wild. We combine several components into a real-time framework: vision-based target estimation, 3D signed-distance mapping for collision and occlusion avoidance, and trajectory optimization for camera motion. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments.
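The sketch below illustrates, on assumed toy data, how a signed-distance map can support occlusion reasoning: it builds a voxel SDF from a small occupancy grid and integrates a clearance penalty along the camera-to-target sight line. The map, resolution, and margin are illustrative only; the real system builds its map incrementally from onboard sensing.

import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical voxel map; the real system maintains a truncated
# signed distance field updated online from onboard sensors.
RES = 0.5                                  # meters per voxel
occ = np.zeros((40, 40, 20), dtype=bool)
occ[18:22, 18:22, :12] = True              # a block obstacle

# Signed distance: positive outside obstacles, negative inside.
sdf = (distance_transform_edt(~occ) - distance_transform_edt(occ)) * RES

def sd(point):
    # Nearest-voxel lookup (a real system would interpolate).
    idx = np.clip((point / RES).astype(int), 0, np.array(occ.shape) - 1)
    return sdf[tuple(idx)]

def occlusion_cost(camera, target, n=50):
    # Integrate a hinge penalty along the camera-to-target sight
    # line; points too close to geometry indicate a blocked view.
    pts = np.linspace(camera, target, n)
    margin = 1.0                           # desired clearance (m)
    return sum(max(0.0, margin - sd(p)) for p in pts) / n

camera = np.array([2.0, 2.0, 5.0])
target = np.array([15.0, 15.0, 1.0])
print(occlusion_cost(camera, target))      # > 0: this view is blocked

The same signed-distance queries serve double duty: evaluated at the drone's own position they yield a collision cost, and evaluated along the sight line they yield the occlusion cost.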

Third, we take a step towards learning the intangible art of cinematography. We all know a good clip when we see it, but we cannot yet objectively specify a formula for one. We propose the use of deep reinforcement learning with a human evaluator in the loop to guide the selection of artistic shots, and show that the learned policies can incorporate intuitive concepts of human aesthetics. Next, we develop a novel data-driven framework that enables direct user control of camera positioning parameters in an intuitive learned semantic space (e.g., calm, enjoyable, establishing), and show its effectiveness in a series of user studies.
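To make the human-in-the-loop idea concrete, here is a toy REINFORCE-style sketch in which a policy over discrete shot types is updated from scalar ratings. The human_score function is a simulated stand-in for a real evaluator, and the shot set and hyperparameters are hypothetical; the thesis uses deep reinforcement learning over richer state and action spaces.

import numpy as np

SHOTS = ["left", "right", "front", "back"]  # discrete shot types

# Simulated stand-in for the human evaluator: in the real setting a
# person scores the rendered clip; here a fixed preference is assumed.
def human_score(shot, rng):
    prefs = {"left": 0.2, "right": 0.4, "front": 0.9, "back": 0.1}
    return prefs[shot] + 0.1 * rng.standard_normal()

# REINFORCE on a single softmax policy over shot types (a deep,
# contextual version would condition on scene features).
rng = np.random.default_rng(0)
logits = np.zeros(len(SHOTS))
lr, baseline = 0.5, 0.0
for step in range(500):
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(len(SHOTS), p=p)
    r = human_score(SHOTS[a], rng)
    baseline += 0.05 * (r - baseline)      # running reward baseline
    grad = -p; grad[a] += 1.0              # d log p(a) / d logits
    logits += lr * (r - baseline) * grad
print(SHOTS[int(np.argmax(logits))])       # learned preferred shot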

Lastly, we take the first steps towards the concept of multi-camera collaboration for filming. Multiple simultaneous viewpoints are necessary when capturing real-world scenes such as sports or social events. In these situations it is difficult to capture the optimal viewpoint at all times with a single aerial camera, especially because the events cannot be reenacted for additional takes. Here, we design motion planning algorithms for multi-camera cinematography that maximize the quality of multiple video streams simultaneously using limited onboard resources.
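As a toy illustration of the multi-camera trade-off, the sketch below greedily assigns cameras to candidate viewpoints, rewarding individual view quality while discounting redundant pairs that film from similar angles. The scores, angles, and greedy objective are assumptions made for illustration; the thesis plans full trajectories rather than static viewpoints.

import numpy as np

# Hypothetical per-viewpoint quality scores and viewing angles.
quality = np.array([0.9, 0.8, 0.75, 0.6, 0.5])
angle = np.array([0, 30, 180, 90, 270]) * np.pi / 180

def team_score(selected):
    # Reward high-quality views; discount pairs filming from
    # similar angles, which provide redundant coverage.
    s = quality[selected].sum()
    for i, a in enumerate(selected):
        for b in selected[i + 1:]:
            s -= 0.5 * max(0.0, np.cos(angle[a] - angle[b]))
    return s

def greedy_assign(n_cameras):
    chosen = []
    for _ in range(n_cameras):
        best = max((v for v in range(len(quality)) if v not in chosen),
                   key=lambda v: team_score(chosen + [v]))
        chosen.append(best)
    return chosen

print(greedy_assign(3))   # picks diverse, high-quality viewpoints

Because the marginal gain of an added viewpoint shrinks as coverage overlaps, this kind of greedy selection is a natural baseline for allocating a small team of cameras.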

BibTeX

@phdthesis{Bonatti-2021-127360,
author = {Rogerio Bonatti},
title = {Active Vision: Autonomous Aerial Cinematography with Learned Artistic Decision-Making},
year = {2021},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-21-16},
keywords = {Cinematography, UAV, Motion Planning, Machine Learning},
}