Affiliations:
1. Memorial University of Newfoundland, Canada
2. National Research Council of Canada Flight Research Laboratory, Canada
Abstract
This work describes a deep learning-based autonomous landing zone identification module for a vertical takeoff and landing vehicle. The proposed module is developed using LiDAR point cloud data and can be integrated into a visual LiDAR odometry and mapping pipeline implemented on the vehicle. “ConvPoint,” the top-performing neural network architecture on an online point cloud segmentation benchmark leaderboard at the time of writing, was chosen as the reference architecture. The datasets were semantically labeled using terrain geometry characteristics, with manual adjustment of labels through visual observation. Point clouds captured by Memorial University, together with online point cloud datasets, were used to transfer-learn the neural network model and to evaluate the accuracy-runtime trade-off of the proposed pipeline. The selected neural network model achieved accuracies of 89.7% and 92.1% on two selected datasets while predicting landing zone labels at 3940.15 and 3633.85 points per second, respectively. Hyperparameter tuning was carried out to obtain higher throughput, achieving a 1 Hz update rate for the landing zone map built from the point cloud inputs of the visual LiDAR odometry and mapping pipeline. The proposed system was validated by evaluating its performance on three variations of point clouds. The results confirm the accuracy-runtime trade-off of the proposed system and show that further optimization can improve performance.
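The accuracy-runtime trade-off reported above can be sanity-checked with a back-of-envelope calculation: at a given per-point throughput, the largest point cloud that can be relabeled within one map-update cycle is throughput divided by update rate. The sketch below is illustrative only; the function name and the "dataset A"/"dataset B" labels are assumptions, while the throughput figures come from the abstract.

```python
# Rough capacity estimate for the 1 Hz landing-zone map update described above.
# Throughput values (points/second) are taken from the abstract; everything
# else is an illustrative assumption, not the authors' implementation.

def max_points_per_update(throughput_pps: float, update_rate_hz: float) -> int:
    """Largest point cloud that can be labeled within one update cycle."""
    return int(throughput_pps / update_rate_hz)

for name, pps in [("dataset A", 3940.15), ("dataset B", 3633.85)]:
    n = max_points_per_update(pps, update_rate_hz=1.0)
    print(f"{name}: up to {n} points per 1 Hz update")
```

Under these assumptions, clouds larger than roughly 3600-3900 points per update would require downsampling or a faster model to sustain the 1 Hz map rate.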
Publisher
Canadian Science Publishing
Cited by
1 article.