Affiliation:
1. College of Information Technology, United Arab Emirates University, Al Ain P.O. Box 15551, United Arab Emirates
2. Emirates Center for Mobility Research, United Arab Emirates University, Al Ain P.O. Box 15551, United Arab Emirates
Abstract
Autonomous vehicles (AVs) are expected to transform transportation, yet maintaining robust situation awareness across diverse driving conditions remains difficult. To enhance AV perception, methods that integrate data from camera, radar, and LiDAR sensors have been proposed. However, because their fusion schemes are rigid, current techniques are insufficiently robust in challenging driving scenarios such as inclement weather, poor lighting, and sensor obstruction. These techniques fall into two main groups: (i) early fusion, which is ineffective when sensor data are distorted or noisy, and (ii) late fusion, which cannot jointly exploit features from multiple sensors and hence yields sub-optimal estimates. To overcome these limitations, we propose a flexible selective sensor fusion framework that learns to recognize the current driving context and fuses the optimal sensor combination, improving robustness without sacrificing efficiency. The proposed framework dynamically emulates early fusion, late fusion, and mixtures of both, enabling a fast decision on the best fusion approach. The framework includes versatile modules for pre-processing heterogeneous data (numeric, alphanumeric, image, and audio), selecting appropriate features, and efficiently fusing the selected features. Versatile object detection and classification models are proposed to detect and categorize objects accurately, and advanced ensembling, gating, and filtering techniques are introduced to select the optimal detection and classification model. In addition, methodologies are proposed to build an accurate context model and decision rules. Widely used datasets, including KITTI, nuScenes, and RADIATE, are employed in the experimental analysis to evaluate the proposed models. The proposed model performed well in both data-level and decision-level fusion tasks and outperformed other fusion models in terms of accuracy and efficiency.
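The abstract's core idea, gating between early (feature-level) and late (decision-level) fusion based on the sensing context, can be illustrated with a minimal sketch. This is not the paper's implementation: the reliability scores, threshold, and function names (`gate_fusion_mode`, `early_fusion`, `late_fusion`) are illustrative assumptions, standing in for the learned context and decision rules the framework derives.

```python
import numpy as np

def gate_fusion_mode(reliabilities, threshold=0.7):
    """Pick a fusion strategy from per-sensor reliability scores in [0, 1].

    If every sensor is reliable, feature-level (early) fusion can exploit
    cross-sensor features; otherwise fall back to decision-level (late)
    fusion over the reliable sensors only. (Illustrative rule, not the
    paper's learned gating.)
    """
    return "early" if min(reliabilities.values()) >= threshold else "late"

def early_fusion(features):
    """Feature-level fusion: concatenate per-sensor feature vectors."""
    return np.concatenate([features[name] for name in sorted(features)])

def late_fusion(scores, reliabilities, threshold=0.7):
    """Decision-level fusion: reliability-weighted average of per-sensor
    class scores, dropping sensors below the reliability threshold."""
    kept = {n: s for n, s in scores.items() if reliabilities[n] >= threshold}
    weights = np.array([reliabilities[n] for n in kept])
    stacked = np.stack([kept[n] for n in kept])
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()
```

For example, with hypothetical fog-degraded scores such as `{"camera": 0.3, "lidar": 0.6, "radar": 0.9}`, the gate selects late fusion and only the radar's detections contribute, mirroring the selective behavior the framework aims for.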
Funder
Emirates Center for Mobility Research of the United Arab Emirates University
ASPIRE Award for Research Excellence
Subject
General Earth and Planetary Sciences
Cited by 7 articles.