Semi-TSGAN: Semi-Supervised Learning for Highlight Removal Based on Teacher-Student Generative Adversarial Network
Authors:
Zheng Yuanfeng 1, Yan Yuchen 1, Jiang Hao 1
Affiliation:
1. School of Electronic Information, Wuhan University, Wuhan 430072, China
Abstract
Despite recent notable advances in highlight image restoration, the scarcity of annotated data and the need for lightweight, deployable highlight removal networks remain significant obstacles to further progress in the field. In this paper, to the best of our knowledge, we are the first to propose a semi-supervised learning paradigm for highlight removal, fusing a teacher–student model with a generative adversarial network while retaining a lightweight network architecture. First, we establish a reliable repository that stores the best predictions as pseudo ground truth, selected through empirical analysis guided by the most dependable No-Reference Image Quality Assessment (NR-IQA) method, which rigorously assesses the quality of model predictions. Second, to address confirmation bias, we integrate contrastive regularization into the framework to reduce the risk of overfitting to inaccurate pseudo labels. Finally, we introduce a comprehensive feature aggregation module and an extensive attention mechanism in the generative network, balancing network performance against computational efficiency. Our experiments include comprehensive evaluations on both full-reference and no-reference highlight benchmarks, and the results demonstrate conclusively that our algorithm achieves substantial quantitative and qualitative improvements over state-of-the-art methods.
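As a minimal illustration of the pseudo-label selection step described above, the Python sketch below shows how a teacher prediction might be admitted into the reliable repository only when a no-reference quality score judges it better than the currently stored entry. The names update_reliable_bank, teacher, and nr_iqa_score are hypothetical placeholders introduced here for illustration, not the authors' actual implementation.

from typing import Any, Callable, Dict

def update_reliable_bank(
    bank: Dict[str, Dict[str, Any]],        # image_id -> {"pseudo_gt": ..., "score": float}
    image_id: str,
    unlabeled_image: Any,
    teacher: Callable[[Any], Any],          # teacher generator (e.g., an averaged copy of the student)
    nr_iqa_score: Callable[[Any], float],   # placeholder NR-IQA scorer; higher means better quality
) -> None:
    """Replace the stored pseudo ground truth for this image only if the
    teacher's new highlight-free prediction scores higher under the NR-IQA metric."""
    prediction = teacher(unlabeled_image)
    score = nr_iqa_score(prediction)
    entry = bank.get(image_id)
    if entry is None or score > entry["score"]:
        bank[image_id] = {"pseudo_gt": prediction, "score": score}

In such a scheme, only entries held in the repository would supervise the student on unlabeled images, while the contrastive regularization described in the abstract would discourage overfitting to inaccurate pseudo labels.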