Affiliation:
1. School of Geospatial Engineering and Science, Sun Yat‐sen University, Zhuhai, China
2. Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University, Xiamen, China
Abstract
Point cloud semantic understanding has made remarkable progress with the development of 3D deep learning. However, aggregating spatial information to improve the network's local feature learning capability remains a major challenge. Many methods have been proposed to improve local information learning, such as constructing multi‐area structures that capture information from different areas; however, these methods lose some local information because each point's features are learned independently. To address this problem, a new network is proposed that considers the relative importance of points within a neighbourhood: local feature capture is enhanced by weighting the features of neighbouring points according to their importance. First, a T‐Net is constructed to learn a transformation matrix that handles the unordered nature of the point cloud. Second, a transformer is used to mitigate the local information loss caused by treating each point in the neighbourhood independently. Experiments show that the proposed network achieves an overall accuracy of 92.2% on the ModelNet40 dataset and 93.8% on the ModelNet10 dataset.
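To make the two components described in the abstract more concrete, the following is a minimal PyTorch sketch of (a) a T‐Net that predicts a transformation matrix for an unordered point cloud and (b) a neighbourhood self‐attention block that weights each neighbouring point's features. The layer widths, module names and k‐nearest‐neighbour grouping are illustrative assumptions, not the authors' exact architecture.

# A minimal sketch, assuming PointNet-style alignment and k-NN attention.
import torch
import torch.nn as nn


class TNet(nn.Module):
    """Predicts a k x k transformation matrix to align an unordered point cloud."""

    def __init__(self, k: int = 3):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv1d(k, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, k * k),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, k, N) point cloud with coordinates as channels
        feat = self.mlp(x).max(dim=2).values               # symmetric (order-invariant) pooling
        mat = self.fc(feat).view(-1, self.k, self.k)
        identity = torch.eye(self.k, device=x.device).unsqueeze(0)
        return mat + identity                              # bias the prediction towards identity


class NeighbourhoodAttention(nn.Module):
    """Self-attention over k nearest neighbours: each neighbour contributes to the
    local feature with a learned weight instead of being treated independently."""

    def __init__(self, dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)

    def forward(self, xyz: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) coordinates, feat: (B, N, C) per-point features
        B, N, C = feat.shape
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices   # (B, N, k) neighbour ids
        neigh = torch.gather(                                             # (B, N, k, C) neighbours
            feat.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))
        q = self.q(feat).unsqueeze(2)                                     # (B, N, 1, C)
        key, val = self.kv(neigh).chunk(2, dim=-1)                        # (B, N, k, C) each
        attn = torch.softmax((q * key).sum(-1) / C ** 0.5, dim=-1)        # (B, N, k) weights
        return (attn.unsqueeze(-1) * val).sum(dim=2)                      # weighted local feature


# Hypothetical usage: align a batch of clouds with the T-Net, then aggregate
# neighbourhood features with attention.
# pts = torch.rand(2, 1024, 3)
# aligned = torch.bmm(pts, TNet()(pts.transpose(1, 2)))
# local = NeighbourhoodAttention(dim=3)(aligned, aligned)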
Funder
National Natural Science Foundation of China
Subject
Earth and Planetary Sciences (miscellaneous), Computers in Earth Sciences, Computer Science Applications, Engineering (miscellaneous)
Cited by
2 articles.