Affiliation:
1. School of Engineering, Beijing Forestry University, Beijing 100086, China
Abstract
Background. With the advancement of “digital forestry” and “intelligent forestry”, point cloud data have emerged as a powerful tool for accurately capturing three-dimensional forest scenes. They enable the creation and presentation of digital forest systems, support the monitoring of dynamic changes such as forest growth and logging, and facilitate the evaluation of forest resource fluctuations. However, forestry point cloud data are characterized by their large volume, and manual processing is time-consuming and labor-intensive. Deep learning, with its exceptional learning capability, holds tremendous potential for processing forestry point cloud data, but realizing this potential depends on the availability of accurately annotated forestry point clouds and on deep learning models designed specifically for forestry applications. In practice, conventional direct annotation methods are inefficient and time-consuming because of complex terrain, dense foliage occlusion, and the uneven sparsity of forestry point clouds. Furthermore, directly applying general-purpose deep learning frameworks to forestry point clouds yields poor accuracy and performance owing to the large size, occlusion, sparsity, and unstructured nature of these scenes. Accurately annotated forestry point cloud datasets and semantic segmentation methods tailored to forestry environments are therefore of paramount importance.
Methods. We propose a point cloud annotation method based on single-tree positioning that improves annotation efficiency and addresses challenges such as occlusion and sparse point distribution in forestry environments. Using this method, we constructed a forestry point cloud semantic segmentation dataset comprising 1259 scenes and 214.4 billion points across four categories.
We also introduce pointDMM, a semantic segmentation framework designed specifically for forestry point clouds. The proposed method first integrates tree features using the DMM module and constructs key segmentation graphs with an energy segmentation function. The cut-pursuit algorithm is then employed to solve the graph and obtain a semantic pre-segmentation. The locally extracted forestry point cloud features from the pre-segmentation are fed into the network, fused across multiple layers with an MLP, and the point cloud is finally segmented with a lightweight PointNet.
Results. The method demonstrates strong segmentation results on the DMM dataset, achieving 93% accuracy on DMM-3, a large-scale forest environment point cloud dataset. Compared with other algorithms, it improves the accuracy of standing-tree recognition by 21%. The method offers significant advantages in extracting feature information from planted-forest point clouds acquired with terrestrial laser scanning (TLS), and it lays a solid foundation for the automation, intelligence, and informatization of forestry, giving it substantial scientific significance.
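The pre-segmentation-then-fusion pipeline described above can be illustrated with a minimal toy sketch. Note the hedges: the greedy edge-merge below is a simplified stand-in for the actual cut-pursuit solver, the fixed two-tree geometry replaces real TLS data, and all function names (`knn_graph`, `pre_segment`) are illustrative, not the paper's implementation; the final PointNet classification step is omitted.

```python
import numpy as np

def knn_graph(points, k=2):
    """Build a k-nearest-neighbour edge list (brute force, toy scale only)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(i, j) for i in range(len(points)) for j in nbrs[i]]

def pre_segment(points, features, edges, lam=0.5):
    """Toy stand-in for cut-pursuit pre-segmentation: union-find merge of the
    endpoints of any edge whose feature distance falls below the penalty lam."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in edges:
        if np.linalg.norm(features[i] - features[j]) < lam:
            parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(points))])

# Two well-separated synthetic "trees": vertical chains of 20 points each,
# 0.1 m spacing, the second offset by (5, 5, 0).
za = np.linspace(0.0, 1.9, 20)
tree_a = np.stack([np.zeros(20), np.zeros(20), za], axis=1)
tree_b = tree_a + np.array([5.0, 5.0, 0.0])
points = np.vstack([tree_a, tree_b])

edges = knn_graph(points, k=2)
segments = pre_segment(points, points, edges, lam=0.5)

# Feature-fusion analogue: concatenate each point's own coordinates with the
# mean of its pre-segment, giving the per-point + segment-context input that
# a lightweight pointwise classifier would consume.
seg_mean = np.stack([points[segments == s].mean(axis=0) for s in segments])
fused = np.concatenate([points, seg_mean], axis=1)
print(fused.shape)  # (40, 6): 3 point coords + 3 segment-context coords
```

Because within-chain edge lengths (0.2 m or less) fall under `lam` while the inter-tree gap (about 7 m) does not, the merge step recovers exactly one segment per tree, which is the role the pre-segmentation plays before pointwise classification.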
Funder
Natural Science Foundation of China
National Key Technology R&D Program of China
Cited by: 2 articles.