Simple Scalable Multimodal Semantic Segmentation Model
Authors:
Zhu Yuchang 1, Xiao Nanfeng 1
Affiliation:
1. School of Computer Science & Engineering, South China University of Technology, Guangzhou 510006, China
Abstract
Visual perception is a crucial component of autonomous driving systems. Traditional approaches to visual perception in autonomous driving often rely on single-modal methods, accomplishing semantic segmentation from RGB images alone. However, for semantic segmentation in autonomous driving visual perception, a more effective strategy is to leverage multiple modalities, because the different sensors of an autonomous driving system provide diverse information, and the complementary features among modalities enhance the robustness of the semantic segmentation model. Contrary to the intuitive belief that more modalities lead to better accuracy, our research reveals that adding modalities to traditional semantic segmentation models can sometimes decrease precision. Inspired by the concept of residual thinking, we propose a multimodal visual perception model that is capable of maintaining or even improving accuracy with the addition of any modality. Our approach is straightforward: RGB serves as the main branch, and the other modal branches employ the same feature extraction backbone. The modal score module (MSM) evaluates channel and spatial scores of all modality features, measuring their importance for overall semantic segmentation. The modal branches then provide additional features to the RGB main branch through the features complementary module (FCM). Leveraging residual thinking further enhances the feature extraction capability of all branches. Through extensive experiments, we draw several conclusions. Integrating certain modalities into traditional semantic segmentation models tends to degrade segmentation accuracy. In contrast, our proposed simple and scalable multimodal model maintains segmentation precision when accommodating any additional modality. Moreover, our approach surpasses several state-of-the-art multimodal semantic segmentation models. Additionally, ablation experiments on the proposed model confirm that the proposed MSM and FCM, together with the incorporation of residual thinking, contribute significantly to the model's performance.
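The architecture described in the abstract can be summarized in a minimal sketch. The following is a hypothetical PyTorch reconstruction, not the authors' released code: it assumes CBAM-style channel and spatial scoring for the MSM and a residual addition of the scored modal features to the RGB branch for the FCM; all class names, layer choices, and dimensions are illustrative assumptions.

```python
# Hypothetical sketch of the MSM/FCM fusion described in the abstract.
# Layer choices (squeeze-and-excitation channel gate, 7x7 spatial gate) and
# all names/dimensions are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class ModalScoreModule(nn.Module):
    """MSM sketch: scores a modal feature map along channel and spatial dimensions."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel score: squeeze-and-excitation style gating.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial score: single-channel map from pooled channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)  # weight channels by importance
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_conv(pooled)  # weight spatial locations


class FeaturesComplementaryModule(nn.Module):
    """FCM sketch: adds scored modal features to the RGB branch as a residual."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, modal_feats: list) -> torch.Tensor:
        # Residual design: with no extra modal features the RGB branch passes
        # through unchanged, so adding a modality should not remove information.
        if not modal_feats:
            return rgb_feat
        complementary = sum(modal_feats)
        return rgb_feat + self.fuse(complementary)


if __name__ == "__main__":
    msm = ModalScoreModule(channels=64)
    fcm = FeaturesComplementaryModule(channels=64)
    rgb = torch.randn(1, 64, 32, 32)    # RGB backbone features
    depth = torch.randn(1, 64, 32, 32)  # e.g. a depth-branch feature map
    fused = fcm(rgb, [msm(depth)])
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

Because the fusion is residual, a configuration with no extra modalities reduces to the plain RGB branch, which is one way to realize the abstract's claim that accuracy is maintained when any modality is added.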