Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data

Xuehan Xiong, Antonio Adan Oliver, Burcu Akinci, and Daniel Huber
Automation in Construction, Vol. 31, May 2013, pp. 325-337.



Abstract
In the Architecture, Engineering, and Construction (AEC) domain, semantically rich 3D information models are increasingly used throughout a facility's life cycle for diverse applications, such as planning renovations, space usage planning, and managing building maintenance. These models, known as building information models (BIMs), are often constructed using dense, three-dimensional (3D) point measurements obtained from laser scanners. Laser scanners can rapidly capture the "as-is" conditions of a facility, which may differ significantly from the design drawings. Currently, the conversion from laser scan data to BIM is primarily a manual process that is labor-intensive and error-prone. This paper presents a method to automatically convert the raw 3D point data from a laser scanner positioned at multiple locations throughout a facility into a compact, semantically rich information model. Our algorithm is capable of identifying and modeling the main visible structural components of an indoor environment (walls, floors, ceilings, windows, and doorways) despite the presence of significant clutter and occlusion, which occur frequently in natural indoor environments. Our method begins by extracting planar patches from a voxelized version of the input point cloud. The algorithm learns the unique features of different types of surfaces and the contextual relationships between them and uses this knowledge to automatically label patches as walls, ceilings, or floors. Then, we perform a detailed analysis of the recognized surfaces to locate openings, such as windows and doorways. This process uses visibility reasoning to fuse measurements from different scan locations and to identify occluded regions and holes in the surface. Next, we use a learning algorithm to intelligently estimate the shape of window and doorway openings even when partially occluded. Finally, occluded surface regions are filled in using a 3D inpainting algorithm. We evaluated the method on a large, highly cluttered data set of a building with forty separate rooms.
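For readers who want a concrete starting point, the sketch below illustrates one way the first stages of such a pipeline (voxel downsampling, planar patch extraction, and a coarse wall/floor/ceiling labeling) could be prototyped. It is not the authors' implementation: the library (Open3D), the RANSAC-based patch extraction, and the simple orientation/height heuristic that stands in for the paper's learned contextual labeling are all assumptions made for illustration only.

```python
# Illustrative sketch only. The paper labels patches with learned surface
# features and contextual relationships; here a crude orientation/height
# heuristic takes that role. Library choice and all thresholds are assumptions.
import numpy as np
import open3d as o3d


def extract_and_label_patches(path, voxel_size=0.05, max_patches=20):
    pcd = o3d.io.read_point_cloud(path)      # registered scans from all positions
    pcd = pcd.voxel_down_sample(voxel_size)   # voxelize to reduce point density

    z_min = pcd.get_min_bound()[2]
    z_max = pcd.get_max_bound()[2]

    patches = []
    remaining = pcd
    for _ in range(max_patches):
        if len(remaining.points) < 500:
            break
        # RANSAC plane fit: a stand-in for the paper's planar patch extraction
        model, inliers = remaining.segment_plane(distance_threshold=0.02,
                                                 ransac_n=3,
                                                 num_iterations=1000)
        if len(inliers) < 500:
            break
        patch = remaining.select_by_index(inliers)
        remaining = remaining.select_by_index(inliers, invert=True)

        a, b, c, d = model
        normal = np.array([a, b, c]) / np.linalg.norm([a, b, c])
        z_mean = np.asarray(patch.points)[:, 2].mean()

        if abs(normal[2]) > 0.9:              # near-horizontal surface
            label = "floor" if z_mean < z_min + 0.3 * (z_max - z_min) else "ceiling"
        elif abs(normal[2]) < 0.1:            # near-vertical surface
            label = "wall"
        else:
            label = "clutter"                 # sloped patch: not handled in this sketch
        patches.append((label, patch))
    return patches
```

The later stages of the method (visibility reasoning across scan positions, learned estimation of opening shapes, and 3D inpainting of occluded regions) are substantially more involved and are described in the paper itself.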

Keywords
interior modeling, 3D modeling, scan to BIM, lidar, object recognition, wall analysis, opening detection

Notes
Associated Lab(s) / Group(s): 3D Vision and Intelligent Systems Group
Associated Project(s): Context-based Recognition of Building Components
Note: DOI: 10.1016/j.autcon.2012.10.006

Text Reference
Xuehan Xiong, Antonio Adan Oliver, Burcu Akinci, and Daniel Huber, "Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data," Automation in Construction, Vol. 31, May 2013, pp. 325-337.

BibTeX Reference
@article{Xiong_2013_7402,
   author = "Xuehan Xiong and Antonio {Adan Oliver} and Burcu Akinci and Daniel Huber",
   title = "Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data",
   journal = "Automation in Construction",
   pages = "325-337",
   month = "May",
   year = "2013",
   volume = "31",
   note = "DOI: 10.1016/j.autcon.2012.10.006"
}