Affiliation:
1. Department of Architecture and Architectural Engineering, Yonsei University, Seoul, Republic of Korea
Abstract
This study presents a novel, deep-learning-based model for the automated reconstruction of cross-sectional drawings from stereo photographs. Targeted cross-sections captured in stereo photographs are detected and translated into sectional drawings using a faster region-based convolutional neural network (Faster R-CNN) and a Pix2Pix generative adversarial network. To address the challenge of perspective correction in the photographs, a novel camera pose optimization method is introduced. This method eliminates the need for camera calibration and image matching, thereby offering greater flexibility in camera positioning and facilitating the use of telephoto lenses while avoiding image-matching errors. Moreover, synthetic image datasets are used for training to facilitate the practical implementation of the proposed model in construction industry applications, given the limited availability of open datasets in this field. The applicability of the proposed model was evaluated through experiments on the cross-sections of curtain wall components. The results demonstrated superior measurement accuracy compared with current laser-scanning and camera-based measurement methods for construction components.
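The abstract describes a two-stage pipeline: a Faster R-CNN detector locates candidate cross-sections in each photograph, and an image-to-image translation network converts the detected regions into drawing-like images. The sketch below is a minimal illustration of that structure in PyTorch, not the authors' implementation; the pretrained detector weights, the score threshold, the `TinyGenerator` stand-in for a Pix2Pix U-Net generator, and the 256x256 crop size are all assumptions introduced here for illustration only.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def detect_cross_sections(image, score_threshold=0.8):
    """Return bounding boxes of candidate cross-section regions.

    Illustrative only: a production system would use a detector
    fine-tuned on cross-section imagery; COCO-pretrained weights are
    used here just so the sketch runs end to end.
    """
    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    detector.eval()
    with torch.no_grad():
        outputs = detector([image])[0]  # image: float tensor (3, H, W) in [0, 1]
    keep = outputs["scores"] > score_threshold
    return outputs["boxes"][keep]

class TinyGenerator(torch.nn.Module):
    """Stand-in for a Pix2Pix-style generator (not the paper's network)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 64, 4, stride=2, padding=1),
            torch.nn.ReLU(inplace=True),
            torch.nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            torch.nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

def photo_to_section_drawings(image):
    """Crop each detected region and translate it into a drawing-like image."""
    generator = TinyGenerator()  # in practice: trained on paired synthetic data
    generator.eval()
    drawings = []
    for box in detect_cross_sections(image):
        x1, y1, x2, y2 = box.int().tolist()
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)               # (1, 3, h, w)
        crop = torch.nn.functional.interpolate(crop, size=(256, 256))
        with torch.no_grad():
            drawings.append(generator(crop))
    return drawings
```

The perspective-correction step (the camera pose optimization without calibration or image matching) is specific to the paper and is not reproduced here.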
Subject
Computational Theory and Mathematics, Computer Graphics and Computer-Aided Design, Computer Science Applications, Civil and Structural Engineering, Building and Construction
Cited by
2 articles.