Vehicle Ego-Trajectory Segmentation Using Guidance Cues
Published: 2024-09-03
Volume: 14, Issue: 17, Page: 7776
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Authors:
Andrei Mihalea 1, Adina Magda Florea 1
Affiliation:
1. Faculty of Automatic Control and Computer Science, National University of Science and Technology POLITEHNICA Bucharest, 60042 Bucharest, Romania
Abstract
Computer vision has significantly influenced recent advancements in autonomous driving by providing cutting-edge solutions for various challenges, including object detection, semantic segmentation, and comprehensive scene understanding. One specific challenge is ego-vehicle trajectory segmentation, which involves learning the vehicle’s path and describing it with a segmentation map. This can play an important role in both autonomous driving and advanced driver assistance systems, as it enhances the accuracy of perceiving and forecasting the vehicle’s movements across different driving scenarios. In this work, we propose a deep learning approach for ego-trajectory segmentation that leverages a state-of-the-art segmentation network augmented with guidance cues provided through various merging mechanisms. These mechanisms are designed to direct the vehicle’s path as intended, utilizing training data obtained with a self-supervised approach. Our results demonstrate the feasibility of using self-supervised labels for ego-trajectory segmentation and embedding directional intentions within the network’s decisions through image and guidance input concatenation, feature concatenation, or cross-attention between pixel features and various types of guidance cues. We also analyze the effectiveness of our approach in constraining the segmentation outputs and prove that our proposed improvements bring major boosts in the segmentation metrics, increasing IoU by more than 12% and 5% compared with our two baseline models. This work paves the way for further exploration into ego-trajectory segmentation methods aimed at better predicting the behavior of autonomous vehicles.
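The abstract lists cross-attention between pixel features and guidance cues as one of the merging mechanisms. The following is a minimal NumPy sketch of that general idea, not the authors' implementation: per-pixel features act as queries attending over a small set of guidance-cue embeddings (all shapes, names, and the single-head form are illustrative assumptions).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pixel_feats, guidance):
    """Condition pixel features on guidance cues via single-head cross-attention.

    pixel_feats: (N, d) flattened per-pixel features (queries)
    guidance:    (M, d) guidance-cue embeddings, e.g. directional
                 intention tokens (keys and values)
    Returns:     (N, d) guidance-conditioned pixel features
    """
    d = pixel_feats.shape[-1]
    scores = pixel_feats @ guidance.T / np.sqrt(d)  # (N, M) scaled similarities
    weights = softmax(scores, axis=-1)              # attention over the cues
    return weights @ guidance                       # weighted mix of cue values

# Toy example: 4 pixels, 2 guidance tokens, feature dimension 8
rng = np.random.default_rng(0)
px = rng.standard_normal((4, 8))
gd = rng.standard_normal((2, 8))
out = cross_attention(px, gd)  # shape (4, 8)
```

In a real segmentation network the queries, keys, and values would pass through learned projections and the result would feed the decoder; this sketch only shows how each pixel's feature becomes a cue-weighted mixture.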
Funder
European Union’s Horizon Europe research and innovation programme