Author:
Yang Menglong, Nagao Katashi
Abstract
The aim of this paper is to digitize the environments in which humans live, at low cost, and to reconstruct highly accurate three-dimensional environments based on those in the real world. This three-dimensional content can be used for applications such as virtual reality environments and three-dimensional maps for automatic driving systems. In general, however, a three-dimensional environment must be carefully reconstructed by manually moving the sensors used to scan the real environment on which the three-dimensional one is based. This is done so that every corner of the entire area can be measured, but time and cost increase as the area expands. Therefore, a system that creates three-dimensional content based on real-world large-scale buildings at low cost is proposed. It automatically scans indoor spaces with a mobile robot equipped with low-cost sensors and generates 3D point clouds. When the robot reaches an appropriate measurement position, it collects the three-dimensional data of shapes observable from that position by using a 3D sensor and a 360-degree panoramic camera. The problem of determining an appropriate measurement position is called the "next best view problem," and it is difficult to solve in a complicated indoor environment. To deal with this problem, a deep reinforcement learning method is employed. It combines reinforcement learning, in which an autonomous agent learns strategies for selecting actions, with deep learning using a neural network. As a result, 3D point cloud data can be generated with better quality than with the conventional rule-based approach.
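To illustrate the kind of method the abstract describes, the following is a minimal sketch of a value-based deep reinforcement learning agent that picks the next measurement position from a set of candidate viewpoints. The state encoding (a flattened coverage grid), the reward (e.g., newly observed voxels), the network sizes, and all names are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hedged sketch: DQN-style next-best-view selection.
# Assumptions (not from the paper): a fixed set of candidate viewpoints,
# a coarse 2D coverage grid as the state, and newly observed voxels as reward.
import random
import torch
import torch.nn as nn

N_CANDIDATES = 8        # assumed number of candidate measurement positions
GRID_CELLS = 32 * 32    # assumed coarse coverage grid describing scan progress

class NBVNetwork(nn.Module):
    """Maps the current coverage state to one Q-value per candidate viewpoint."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(GRID_CELLS, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, N_CANDIDATES),
        )

    def forward(self, state):
        return self.layers(state)

def select_view(net, state, epsilon=0.1):
    """Epsilon-greedy choice of the next measurement position."""
    if random.random() < epsilon:
        return random.randrange(N_CANDIDATES)
    with torch.no_grad():
        return int(net(state).argmax().item())

def td_update(net, optimizer, state, action, reward, next_state, gamma=0.95):
    """One temporal-difference update; the reward could be, e.g., the number
    of voxels newly observed after scanning from the chosen viewpoint."""
    q = net(state)[action]
    with torch.no_grad():
        target = reward + gamma * net(next_state).max()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    net = NBVNetwork()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    state = torch.rand(GRID_CELLS)       # placeholder coverage state
    action = select_view(net, state)
    next_state = torch.rand(GRID_CELLS)  # placeholder post-scan state
    td_update(net, optimizer, state, action, reward=1.0, next_state=next_state)
```

In practice the state representation and reward shaping would follow the paper's own measurement setup; the sketch only shows how a learned value function can replace a hand-written rule for choosing where to scan next.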
Cited by
1 article.