An Adaptive Semantic Segmentation Network for Adversarial Learning Domain Based on Low-Light Enhancement and Decoupled Generation
Published: 2024-04-13
Issue: 8
Volume: 14
Page: 3295
ISSN: 2076-3417
Container-title: Applied Sciences
Short-container-title: Applied Sciences
Language: en
Authors:
Wang Meng 1 (ORCID), Zhang Zhuoran 1, Liu Haipeng 2 (ORCID)
Affiliations:
1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
2. Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China
Abstract
Nighttime semantic segmentation suffers significant mask degradation due to low contrast, blurred imaging, and low-quality annotations. In this paper, we introduce a domain adaptive approach for nighttime semantic segmentation that overcomes the reliance on low-light image annotations by transferring the source domain model to the target domain. On the front end, a low-light image enhancement sub-network combining lightweight deep learning with mapping curve iteration is adopted to enhance nighttime foreground contrast. In the segmentation network, body generation and edge preservation branches are implemented to generate consistent representations within the same semantic region. Additionally, a pixel weighting strategy is embedded to increase prediction accuracy for small targets. During training, a discriminator is implemented to distinguish features between the source and target domains, thereby guiding the segmentation network in adversarial transfer learning. The proposed approach's effectiveness is verified through testing on Dark Zurich, Nighttime Driving, and CityScapes, including evaluations of mIoU, PSNR, and SSIM. The results confirm that our approach surpasses existing baselines in these segmentation scenarios.
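The enhancement sub-network described above iterates a mapping curve over the input image. A minimal sketch of one common quadratic curve-iteration scheme (in the style of Zero-DCE) is shown below; the function name, the constant alpha maps, and the NumPy setting are illustrative assumptions, not the paper's actual implementation, where the per-pixel alpha maps would be predicted by the lightweight network:

```python
import numpy as np

def iterate_curve(image, alpha_maps):
    """Apply an iterative quadratic mapping curve to a low-light image.

    image      : HxWxC array with values in [0, 1]
    alpha_maps : list of HxWxC arrays in [-1, 1], one per iteration
                 (in practice predicted by a lightweight CNN)
    """
    le = image
    for alpha in alpha_maps:
        # Each step brightens dark pixels while keeping values in [0, 1]:
        # LE_n = LE_{n-1} + alpha * LE_{n-1} * (1 - LE_{n-1})
        le = le + alpha * le * (1.0 - le)
    return le

# Toy usage: a uniformly dark image, constant brightening over 4 iterations
dark = np.full((4, 4, 3), 0.1)
alphas = [np.full((4, 4, 3), 0.8) for _ in range(4)]
enhanced = iterate_curve(dark, alphas)
```

Because the update is a fixed point of the quadratic term at 0 and 1, repeated iterations raise mid-range intensities without clipping, which is why such curves suit low-light foreground enhancement.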
Funder
National Natural Science Foundation of China
Yunnan Provincial Science and Technology Plan Project
Faculty of Information Engineering and Automation, Kunming University of Science and Technology