Abstract
Recently released research on deep learning applications for perception in autonomous driving focuses heavily on LiDAR point cloud data as input to neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). Accordingly, a large share of the vehicle platforms used to create the datasets released for the development of these networks, as well as some commercial AD solutions on the market, rely on expensive sensor arrays combining many sensors and several sensor modalities. These costs create a barrier to entry for low-cost solutions to critical perception tasks such as Object Detection and SLAM. This paper reviews current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation, discussing the design considerations imposed by the real-time processing requirement and presenting results that demonstrate the usability of the developed work on the proposed low-cost platform.
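The abstract names graph-based SLAM as one of the two real-time perception tasks targeted by the platform. As a purely illustrative aid, and not the paper's implementation, the sketch below shows the core idea behind pose-graph SLAM in 2D: poses are graph nodes, odometry and loop-closure measurements are edges, and a nonlinear least-squares solver redistributes the accumulated drift over the trajectory. The helper functions, the toy edge values, and the use of SciPy are assumptions made only for this example.

# Minimal 2D pose-graph SLAM sketch (illustrative only, not the paper's method).
# Poses are (x, y, theta); each edge stores a measured relative pose between two nodes.
import numpy as np
from scipy.optimize import least_squares

def relative_pose(a, b):
    """Pose of b expressed in the frame of a."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([c * dx + s * dy,
                     -s * dx + c * dy,
                     np.arctan2(np.sin(b[2] - a[2]), np.cos(b[2] - a[2]))])

def residuals(flat_poses, edges):
    poses = flat_poses.reshape(-1, 3)
    res = [poses[0]]                      # anchor the first pose at the origin
    for i, j, z in edges:                 # z is the measured relative pose i -> j
        err = relative_pose(poses[i], poses[j]) - z
        err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap the angle error
        res.append(err)
    return np.concatenate(res)

# Toy square trajectory: three odometry edges plus one slightly noisy loop closure.
edges = [(0, 1, np.array([1.0, 0.0, np.pi / 2])),
         (1, 2, np.array([1.0, 0.0, np.pi / 2])),
         (2, 3, np.array([1.0, 0.0, np.pi / 2])),
         (3, 0, np.array([1.05, 0.02, np.pi / 2]))]   # loop closure back to the start

initial = np.zeros(4 * 3)                 # rough initial guess for four poses
solution = least_squares(residuals, initial, args=(edges,))
print(solution.x.reshape(-1, 3))

Optimizing the whole graph at once, rather than trusting raw odometry, is what lets a loop closure correct drift across every earlier pose; real-time systems typically solve this incrementally with sparse solvers rather than the dense toy setup above.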
Funder
European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
11 articles.
1. Research on Automotive LiDAR Performance Based on Simulated Filtering Algorithm;2024 IEEE 4th International Conference on Electronic Technology, Communication and Information (ICETCI);2024-05-24
2. An Extrinsic Calibration Method between LiDAR and GNSS/INS for Autonomous Driving;2024 IEEE International Conference on Robotics and Automation (ICRA);2024-05-13
3. VAC: Enhanced Visual-LIDAR Fusion SLAM Framework using Square Marker;2024 7th International Conference on Advanced Algorithms and Control Engineering (ICAACE);2024-03-01
4. LeGO-LOAM-FN: An Improved Simultaneous Localization and Mapping Method Fusing LeGO-LOAM, Faster_GICP and NDT in Complex Orchard Environments;Sensors;2024-01-16
5. LiDAR Point Clouds in Autonomous Driving Integrated with Deep Learning: A Tech Prospect;2024 Fourth International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT);2024-01-11