Authors:
Liu Danping, Zhang Dong, Wang Lei, Wang Jun
Abstract
Introduction
Semantic segmentation is a crucial visual representation learning task for autonomous driving systems, as it enables the perception of surrounding objects and road conditions to ensure safe and efficient navigation.

Methods
In this paper, we present a novel semantic segmentation approach for autonomous driving scenes using a Multi-Scale Adaptive Attention Mechanism (MSAAM). The proposed method addresses the challenges of complex driving environments, including large-scale variations, occlusions, and diverse object appearances. Our MSAAM integrates features from multiple scales and adaptively selects the most relevant ones for precise segmentation. We introduce a novel attention module that combines spatial, channel-wise, and scale-wise attention mechanisms to enhance the discriminative power of features.

Results
On key objectives of the Cityscapes dataset, the model achieves ClassAvg: 81.13 and mIoU: 71.46. On comprehensive evaluation metrics, it achieves AUROC: 98.79, AP: 68.46, and FPR95: 5.72. Its computational cost is GFLOPs: 2117.01 with an inference time of 61.06 ms. All results surpass those of the comparison models.

Discussion
The proposed method achieves superior performance compared to state-of-the-art techniques on several benchmark datasets, demonstrating its efficacy in addressing the challenges of autonomous driving scene understanding.
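The abstract only sketches the MSAAM at a high level, so the PyTorch snippet below is a minimal illustrative sketch of one plausible reading of "spatial, channel-wise, and scale-wise attention over multi-scale features", not the authors' implementation. The class name, the squeeze-and-excitation-style channel gate, the 7x7 spatial-attention convolution, the reduction ratio, and the fusion order are all assumptions.

```python
# Hypothetical sketch of a multi-scale adaptive attention block; the paper's
# actual MSAAM design is not specified in the abstract, so details are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleAdaptiveAttention(nn.Module):
    """Fuse same-channel feature maps from several scales using channel-,
    spatial-, and scale-wise attention (illustrative, names assumed)."""

    def __init__(self, channels: int, num_scales: int = 3, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze-and-excitation style gating.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over avg/max channel descriptors.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # Scale attention: one logit per input scale.
        self.scale_fc = nn.Linear(channels, num_scales)

    def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
        b = features[0].shape[0]
        size = features[0].shape[-2:]
        # Bring every scale to the finest resolution before attending.
        feats = [
            F.interpolate(f, size=size, mode="bilinear", align_corners=False)
            for f in features
        ]
        attended = []
        for f in feats:
            c = f.shape[1]
            # Channel-wise attention from globally pooled statistics.
            w_c = self.channel_fc(f.mean(dim=(2, 3))).view(b, c, 1, 1)
            f = f * w_c
            # Spatial attention from avg- and max-pooled channel maps.
            desc = torch.cat(
                [f.mean(dim=1, keepdim=True), f.amax(dim=1, keepdim=True)], dim=1
            )
            f = f * torch.sigmoid(self.spatial_conv(desc))
            attended.append(f)
        stacked = torch.stack(attended, dim=1)             # (B, S, C, H, W)
        # Scale-wise attention: softmax weights per sample over the S scales.
        pooled = stacked.mean(dim=(1, 3, 4))               # (B, C)
        w_s = torch.softmax(self.scale_fc(pooled), dim=1)  # (B, S)
        return (stacked * w_s.view(b, -1, 1, 1, 1)).sum(dim=1)


# Usage on three pyramid levels of a backbone (shapes assumed):
msaam = MultiScaleAdaptiveAttention(channels=64, num_scales=3)
pyramid = [torch.randn(2, 64, 64, 64),
           torch.randn(2, 64, 32, 32),
           torch.randn(2, 64, 16, 16)]
fused = msaam(pyramid)  # -> (2, 64, 64, 64), fed to a segmentation head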