Learning from Synthetic Point Cloud Data for Historical Buildings Semantic Segmentation

Authors:

Morbidoni Christian 1, Pierdicca Roberto 2, Paolanti Marina 1, Quattrini Ramona 2, Mammoli Raissa 2

Affiliations:

1. Università Politecnica delle Marche (DII), Ancona, Italy

2. Università Politecnica delle Marche (DICEA), Ancona, Italy

Abstract

Historical heritage demands robust pipelines for obtaining Heritage Building Information Modeling (HBIM) models that are fully interoperable and rich in informative content. The definition of efficient Scan-to-BIM workflows represents an important step toward more efficient management of the historical real estate, as creating structured three-dimensional (3D) models from point clouds is complex and time-consuming. In this scenario, semantic segmentation of 3D point clouds is gaining increasing attention, since it can help to automatically recognize historical architectural elements. Recent Deep Learning approaches have proved to provide reliable and affordable degrees of automation in other contexts, such as road scene understanding. However, semantic segmentation is particularly challenging in historical and classical architecture, due to the complexity of shapes and the limited repeatability of elements across different buildings, which makes it difficult to define common patterns within the same class of elements. Furthermore, as Deep Learning models require a considerably large amount of annotated data to be trained and tuned to properly handle unseen scenes, the lack of large, publicly available annotated point clouds in the historical building domain is a major obstacle that hinders research in this direction, and creating a critical mass of annotated point clouds by manual annotation is very time-consuming and impractical. To tackle this issue, in this work we explore the idea of leveraging synthetic point cloud data to train Deep Learning models to perform semantic segmentation of point clouds obtained via Terrestrial Laser Scanning (TLS). The aim is to provide a first assessment of the use of synthetic data to drive Deep Learning-based semantic segmentation in the context of historical buildings. To this end, we present an improved version of the Dynamic Graph CNN (DGCNN), named RadDGCNN, whose main improvement consists in exploiting the radius distance. In our experiments, we evaluate models trained on publicly available synthetic datasets of two different historical buildings: the Ducal Palace in Urbino, Italy, and Palazzo Ferretti in Ancona, Italy. RadDGCNN yields good results, demonstrating improved segmentation performance on the real TLS datasets.
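The abstract states only that RadDGCNN's main improvement over DGCNN lies in exploiting the radius distance, without giving implementation details. As an illustrative sketch (not the authors' implementation), the PyTorch snippet below shows how a DGCNN-style EdgeConv feature construction could be augmented with a per-edge radius channel; the function names `knn` and `radius_edge_features` and the exact feature layout are assumptions made here for illustration.

```python
# Hypothetical sketch: DGCNN-like edge features concat(x_i, x_j - x_i)
# extended with the neighbor radius ||x_j - x_i|| as an extra channel.
# This is NOT the authors' RadDGCNN code, only an illustration of the idea.
import torch

def knn(x, k):
    # x: (B, C, N) point features; returns (B, N, k) nearest-neighbor indices.
    inner = -2 * torch.matmul(x.transpose(2, 1), x)        # (B, N, N)
    xx = torch.sum(x ** 2, dim=1, keepdim=True)            # (B, 1, N)
    neg_sq_dist = -xx - inner - xx.transpose(2, 1)         # -||x_i - x_j||^2
    return neg_sq_dist.topk(k=k, dim=-1)[1]                # (B, N, k)

def radius_edge_features(x, k=20):
    # Build per-edge features (center, offset, radius) for each of the
    # k nearest neighbors of every point.
    B, C, N = x.shape
    idx = knn(x, k)                                        # (B, N, k)
    idx_base = torch.arange(B, device=x.device).view(-1, 1, 1) * N
    idx = (idx + idx_base).view(-1)
    x_t = x.transpose(2, 1).contiguous()                   # (B, N, C)
    neighbors = x_t.view(B * N, C)[idx, :].view(B, N, k, C)
    center = x_t.view(B, N, 1, C).expand(-1, -1, k, -1)    # (B, N, k, C)
    diff = neighbors - center                              # offsets x_j - x_i
    radius = diff.norm(dim=-1, keepdim=True)               # edge length, (B, N, k, 1)
    edge = torch.cat([center, diff, radius], dim=-1)       # (B, N, k, 2C + 1)
    return edge.permute(0, 3, 1, 2).contiguous()           # (B, 2C + 1, N, k)
```

In a standard DGCNN, a shared MLP and max-pooling over the k neighbors would follow; adding the radius channel simply gives that MLP explicit access to how far each neighbor lies from the center point.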

Funder

CIVITAS

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design, Computer Science Applications, Information Systems, Conservation
