3D Reconstruction of Anatomical Structures from Endoscopic Images

Chenyu Wu
doctoral dissertation, tech. report CMU-RI-TR-10-04, Robotics Institute, Carnegie Mellon University, January, 2010


Download
  • Adobe portable document format (pdf) (12MB)

Abstract
Endoscopy is attracting increasing attention for its role in minimally invasive, computer-assisted, and tele-surgery. Analyzing endoscopic images to obtain meaningful information about anatomical structures, such as their 3D shapes, deformations, and appearances, is crucial to such surgical applications. However, 3D reconstruction of bones from endoscopic images is challenging due to the endoscope's small field of view, large image distortion, featureless surfaces, and occlusion by blood and particles. In this thesis, a novel methodology is developed for accurate 3D bone reconstruction from endoscopic images by exploiting and enhancing computer vision techniques such as shape from shading, tracking, and statistical modeling.

We first design a complete calibration scheme to estimate both geometric and photometric parameters, including the rotation angle, the light intensity, and the spatial distribution of the light sources; this calibration is crucial to our subsequent analysis of endoscopic images. A solution is then presented for reconstructing the Lambertian surface of a bone from a sequence of overlapping endoscopic images in which only partial boundaries are visible. We extend the classical shape-from-shading approach to handle perspective projection and near point light sources that are not co-located with the camera center. By tracking the endoscope, the complete occluding boundary of the bone is obtained by aligning the partial boundaries from different images, and a complete, consistent shape is recovered by simultaneously growing the surface normals and depths across all views. Finally, to deal with over-smoothing and occlusions, we employ a statistical atlas to constrain and refine the multi-view shape from shading. A two-level framework is also developed for efficient atlas construction.
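As a rough illustration of the imaging model behind the extended shape from shading (the notation and the inverse-square falloff below are assumptions for exposition, not taken verbatim from the thesis): for a near point light source at position L that is not co-located with the camera center, a Lambertian surface point P with unit normal n and albedo rho is observed with intensity approximately

\[
E(\mathbf{P}) \;\approx\; \sigma \,\rho\, \frac{\mathbf{n}\cdot(\mathbf{L}-\mathbf{P})}{\lVert \mathbf{L}-\mathbf{P} \rVert^{3}},
\]

where sigma collects the calibrated light intensity and camera gain, and the cubic denominator combines the cosine of the incidence angle with the 1/||L-P||^2 distance falloff. Recovering shape then amounts to inverting this relation jointly with the perspective projection of P onto the image plane, rather than assuming the distant, camera-centered light of classical shape from shading.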

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Associated Lab(s) / Group(s): Computational Symmetry
Associated Project(s): Near Regular Texture -- Analysis, Synthesis and Manipulation
Number of pages: 131

Text Reference
Chenyu Wu, "3D Reconstruction of Anatomical Structures from Endoscopic Images," doctoral dissertation, tech. report CMU-RI-TR-10-04, Robotics Institute, Carnegie Mellon University, January, 2010

BibTeX Reference
@phdthesis{Wu_2010_6523,
   author = "Chenyu Wu",
   title = "3D Reconstruction of Anatomical Structures from Endoscopic Images",
   school = "Robotics Institute, Carnegie Mellon University",
   month = "January",
   year = "2010",
   number = "CMU-RI-TR-10-04",
   address = "Pittsburgh, PA",
}