Carnegie Mellon University
Visual Hull Construction, Alignment and Refinement Across Time

Kong Man Cheung
tech. report CMU-RI-TR-02-05, Robotics Institute, Carnegie Mellon University, January, 2002


Visual Hull (VH) construction is a popular method of shape estimation. The method, also known as Shape from Silhouette (SFS), approximates the shape of an object from multiple silhouette images by constructing an upper bound on the shape called the Visual Hull. SFS is used in many applications such as non-invasive 3D object digitization, 3D object recognition and, more recently, human motion tracking and analysis. Though SFS is straightforward to implement and easy to use, it has several limitations. Most existing SFS methods are too slow for real-time applications, and the estimated shape is sensitive to silhouette noise and camera calibration errors. Moreover, the VH is only a conservative approximation of the actual shape of the object, and the approximation can be very coarse when there are only a few cameras.

In my thesis, I propose to investigate some of these shortcomings and suggest solutions to overcome them. First, a voxel-based real-time SFS algorithm called SPOT is proposed and its behavior under noisy silhouette images is analyzed. Second, the conservative nature of SFS is improved by incorporating silhouette images across time. The improvement is achieved by first estimating the rigid motions between visual hulls formed at different time instants (visual hull alignment) and then combining them (visual hull refinement) to obtain a tighter bound on the object's shape. The ambiguity issue of visual hull alignment is identified and addressed. This study is presented here in 2D; in the thesis I propose to extend it to 3D objects. Third, an algorithm that uses color consistency to resolve the alignment ambiguity problem is proposed. This algorithm constructs entities called bounding edges of the VH and uses the Fundamental Theorem of the Visual Hull to find points on the surface of the object. Using an idea similar to image alignment, these surface points are used to estimate the motion between two visual hulls. Once the rigid motion across time is known, all the silhouette images are treated as being captured at the same time instant and the shape of the object is refined. The algorithm is validated on both synthetic and real data. Finally, the advantages and disadvantages of three ways of representing the VH (intersection of bounding cones, voxels, and bounding edges) are discussed.
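The voxel-based construction described above can be illustrated with a minimal silhouette-carving sketch. This is not the SPOT algorithm itself, only the underlying SFS test it builds on: a voxel is kept only if its projection falls inside every silhouette. The function name and interface (3x4 projection matrices, binary silhouette images) are illustrative assumptions.

```python
import numpy as np

def carve_visual_hull(voxel_centers, cameras, silhouettes):
    """Keep the voxels whose projections lie inside all silhouettes.

    voxel_centers: (N, 3) array of 3D points.
    cameras:       list of 3x4 projection matrices.
    silhouettes:   list of 2D binary arrays (nonzero = foreground).
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for P, sil in zip(cameras, silhouettes):
        proj = homog @ P.T                      # project to image plane
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # A voxel survives this view only if it projects onto foreground.
        inside[inside] &= sil[v[inside], u[inside]] > 0
        keep &= inside
    return voxel_centers[keep]
```

The intersection over views is exactly why the result is an upper bound: every carving step can only remove volume, never add it, so the true object is always contained in what remains.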
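The refinement step, once alignment has supplied the rigid motions, can be sketched in the same style. Assuming each frame provides an occupancy test (e.g. the silhouette-consistency check above) and a rigid motion (R, t) mapping reference-frame points into that frame, the tighter bound is the intersection of all per-frame hulls. All names here are illustrative, not taken from the thesis.

```python
import numpy as np

def refine_across_time(voxel_centers, occupancy_tests, motions):
    """Intersect visual hulls across time in a common reference frame.

    voxel_centers:   (N, 3) candidate points in the reference frame.
    occupancy_tests: list of callables, each mapping an (N, 3) array of
                     points (in that frame's coordinates) to a boolean mask.
    motions:         list of (R, t) rigid motions taking reference-frame
                     points into the corresponding frame.
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    for test, (R, t) in zip(occupancy_tests, motions):
        moved = voxel_centers @ R.T + t   # express voxels in frame k
        keep &= test(moved)               # must be occupied in every frame
    return voxel_centers[keep]
```

Because every additional frame adds silhouettes from effectively new viewpoints, the intersected hull can only shrink toward the true shape, which is the sense in which temporal refinement tightens the bound.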

Keywords: 3D Reconstruction, Multiple View, Visual Hull, Shape from Silhouettes, Temporal Reconstruction, Motion Estimation, Stereo

Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Associated Lab(s) / Group(s): Vision for Virtual Environments, Virtualized Reality™, Human Identification at a Distance
Associated Project(s): Virtualized Reality™
Note: Thesis Proposal

Text Reference
Kong Man Cheung, "Visual Hull Construction, Alignment and Refinement Across Time," tech. report CMU-RI-TR-02-05, Robotics Institute, Carnegie Mellon University, January, 2002

BibTeX Reference
@techreport{cheung2002visualhull,
   author = "Kong Man Cheung",
   title = "Visual Hull Construction, Alignment and Refinement Across Time",
   institution = "Robotics Institute, Carnegie Mellon University",
   month = "January",
   year = "2002",
   number = "CMU-RI-TR-02-05",
   address = "Pittsburgh, PA",
   note = "Thesis Proposal"
}