Author:
Bae Hyojoon, Golparvar-Fard Mani, White Jules
Abstract
Background
Many context-aware techniques have been proposed to deliver cyber-information, such as project specifications or drawings, to on-site users by intelligently interpreting their environment. However, these techniques primarily rely on RF-based location tracking technologies (e.g., GPS or WLAN), which typically do not provide sufficient precision in congested construction sites or require additional hardware and custom mobile devices.
Method
This paper presents a new vision-based mobile augmented reality system that allows field personnel to query and access 3D cyber-information on-site using photographs taken with standard mobile devices. The system does not require location tracking modules, external hardware attachments, or optical fiducial markers to localize a user’s position. Rather, the user’s location and orientation are derived purely by comparing images from the user’s mobile device to a 3D point cloud model generated from a set of pre-collected site photographs.
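To make the image-to-point-cloud localization idea concrete, the following is a minimal sketch, assuming OpenCV (cv2) and a pre-built point cloud whose 3D points carry ORB descriptors. It illustrates the general match-then-PnP approach only; it is not the authors' HD4AR implementation, and all names and parameters are illustrative.

```python
# Minimal sketch: localize a camera by matching a query photo against a
# descriptor-indexed 3D point cloud and solving PnP with RANSAC (OpenCV).
import cv2
import numpy as np

def localize(query_img, cloud_points_3d, cloud_descriptors, camera_matrix):
    """cloud_points_3d: (N, 3) float32; cloud_descriptors: (N, 32) uint8 ORB descriptors."""
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(query_img, None)
    if descriptors is None:
        return None

    # Brute-force Hamming matching: query descriptors vs. point-cloud descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, cloud_descriptors)
    if len(matches) < 6:
        return None

    object_points = np.float32([cloud_points_3d[m.trainIdx] for m in matches])
    image_points = np.float32([keypoints[m.queryIdx].pt for m in matches])

    # Robustly estimate the camera rotation and translation,
    # i.e. the user's position and orientation relative to the point cloud.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, camera_matrix, None,
        reprojectionError=4.0, iterationsCount=200)
    return (rvec, tvec) if ok else None
```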
Results
The experimental results show that 1) the underlying 3D reconstruction module of the system generates complete 3D point cloud models of the target scene and is up to 35 times faster than other state-of-the-art Structure-from-Motion (SfM) algorithms, and 2) localization takes at most a few seconds on an actual construction site.
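For orientation, the sketch below shows the simplest two-view form of the kind of reconstruction an SfM module performs: match features between two site photographs, recover relative pose, and triangulate a sparse point cloud. It assumes OpenCV and a known intrinsic matrix K; it is not the paper's (much faster) multi-view pipeline.

```python
# Minimal two-view SfM sketch (OpenCV): feature matching, essential-matrix
# pose recovery, and triangulation into a sparse 3D point cloud.
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative pose between the two photographs from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate matched points (homogeneous -> Euclidean coordinates).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T
```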
Conclusion
The localization speed and empirical accuracy of the system make it practical for use on real-world construction sites. Using an actual construction case study, the perceived benefits and limitations of the proposed method for on-site context-aware applications are discussed in detail.
Publisher
Springer Science and Business Media LLC
Subject
Computer Graphics and Computer-Aided Design, Computer Science Applications, Computer Vision and Pattern Recognition, Engineering (miscellaneous), Modelling and Simulation
Cited by
106 articles.