Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data

Antonio Adan Oliver, Xuehan Xiong, Burcu Akinci, and Daniel Huber
Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC), July 2011.


Download
  • Adobe portable document format (pdf) (917KB)
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract
Laser scanners are increasingly used to create semantically rich 3D models of buildings for civil engineering applications such as planning renovations, space usage planning, and building maintenance. Currently, these models are created manually, a time-consuming and error-prone process. This paper presents a method to automatically convert the raw 3D point data from a laser scanner positioned at multiple locations throughout a building into a compact, semantically rich model. Our algorithm is capable of identifying and modeling the main structural components of an indoor environment (walls, floors, ceilings, windows, and doorways) despite the presence of significant clutter and occlusion, which occur frequently in natural indoor environments. Our method begins by extracting planar patches from a voxelized version of the input point cloud. We use a conditional random field model to learn contextual relationships between patches and use this knowledge to automatically label patches as walls, ceilings, or floors. Then, we perform a detailed analysis of the recognized surfaces to locate windows and doorways. This process uses visibility reasoning to fuse measurements from different scan locations and to identify occluded regions and holes in the surface. Next, we use a learning algorithm to intelligently estimate the shape of window and doorway openings even when they are partially occluded. Finally, occluded regions on the surfaces are filled in using a 3D inpainting algorithm. We evaluated the method on a large, highly cluttered data set of a building with forty separate rooms, yielding promising results.
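The first stage of the pipeline described above (voxelizing the point cloud, extracting a planar patch, and assigning a coarse structural label from the patch orientation) can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the voxel size, the SVD-based plane fit, and the normal-orientation threshold are assumptions chosen for clarity.

```python
# Illustrative sketch (not the paper's actual algorithm): voxelize a point
# cloud, fit a plane to the surviving points via SVD, and use the plane
# normal's orientation to assign a coarse structural label.
import numpy as np

def voxel_downsample(points, voxel_size=0.1):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

def fit_plane_normal(points):
    """Least-squares plane normal: direction of least variance via SVD."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def coarse_label(normal, vertical_tol=0.2):
    """A near-horizontal normal suggests a wall; near-vertical, a floor/ceiling."""
    nz = abs(normal[2]) / np.linalg.norm(normal)
    return "wall" if nz < vertical_tol else "floor/ceiling"

# Synthetic wall patch: noisy points on the vertical plane x = 2.
rng = np.random.default_rng(0)
pts = np.column_stack([
    np.full(500, 2.0) + rng.normal(0, 0.005, 500),  # x ~ 2 (wall plane)
    rng.uniform(0, 4, 500),                          # y along the wall
    rng.uniform(0, 3, 500),                          # z (height)
])
voxels = voxel_downsample(pts, voxel_size=0.1)
normal = fit_plane_normal(voxels)
print(coarse_label(normal))  # prints "wall"
```

In the paper, this orientation heuristic is replaced by a learned conditional random field that also exploits contextual relationships between patches; the sketch only shows the geometric preprocessing that precedes that step.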

Keywords
interior modeling, 3D modeling, scan to BIM, lidar, object recognition, wall analysis, opening detection

Notes
Associated Lab(s) / Group(s): 3D Vision and Intelligent Systems Group
Associated Project(s): Automated Reverse Engineering of Buildings, Context-based Recognition of Building Components, Detailed Wall Modeling in Cluttered Environments
Note: This material is based upon work supported by the National Science Foundation under Grant No. 0856558 and by the Pennsylvania Infrastructure Technology Alliance. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We thank Quantapoint, Inc., for providing experimental data.

Text Reference
Antonio Adan Oliver, Xuehan Xiong, Burcu Akinci, and Daniel Huber, "Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data," Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC), July 2011.

BibTeX Reference
@inproceedings{Adan_Oliver_2011_6859,
   author = "Antonio {Adan Oliver} and Xuehan Xiong and Burcu Akinci and Daniel Huber",
   title = "Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data",
   booktitle = "Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC)",
   month = "July",
   year = "2011",
   notes = "This material is based upon work supported by the National Science Foundation under Grant No. 0856558 and by the Pennsylvania Infrastructure Technology Alliance. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We thank Quantapoint, Inc., for providing experimental data."
}