FDDS: Feature Disentangling and Domain Shifting for Domain Adaptation
Published: 2023-07-05
Issue: 13
Volume: 11
Page: 2995
ISSN: 2227-7390
Container-title: Mathematics
Language: en
Short-container-title: Mathematics
Author:
Chen Huan (1), Gao Farong (1,2, ORCID), Zhang Qizhong (1,2, ORCID)
Affiliation:
1. HDU-ITMO Joint Institute, Hangzhou Dianzi University, Hangzhou 310018, China
2. School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
Abstract
Domain adaptation is a learning strategy that aims to improve a model's performance in the current domain by leveraging information from similar domains. To analyze the effects of feature disentangling on domain adaptation and to evaluate a model's suitability in the original scene, we present a method called feature disentangling and domain shifting (FDDS). FDDS utilizes sample information from both the source and target domains, employing a non-linear disentangling approach with learnable weights to dynamically separate content and style features. Additionally, we introduce a lightweight component, the domain shifter, into the network architecture. This component maintains classification performance in both the source and target domains while incurring only moderate overhead, and it uses an attention mechanism to enhance the network's feature-extraction ability. Extensive experiments demonstrated that FDDS can effectively disentangle features with clear separation boundaries while preserving the model's classification ability in the source domain. Under the same conditions, we evaluated FDDS against advanced algorithms on digit and road-scene datasets. In the 19 classification tasks for road scenes, FDDS outperformed the competition in 11 categories, including a notable 2.7% accuracy improvement on the bicycle label. These comparative results highlight the advantages of FDDS in achieving high accuracy in the target domain.
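The abstract names two components but gives no implementation details. As a rough illustration only, the sketch below shows one plausible reading of them in NumPy: a per-channel learnable gate that splits features into complementary content and style parts, and a lightweight single-head attention block with a residual connection standing in for the domain shifter. All names (`FeatureDisentangler`, `domain_shifter`) and the exact gating and attention forms are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class FeatureDisentangler:
    """Split features into content/style via a learnable sigmoid gate.

    Hypothetical reading of "learnable weights to dynamically separate
    content and style features": each channel gets a weight g in (0, 1);
    content = g * f and style = (1 - g) * f, so content + style == f.
    """
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # In a real model these logits would be trained by backprop.
        self.gate_logits = rng.normal(0.0, 0.01, size=dim)

    def __call__(self, f):
        g = 1.0 / (1.0 + np.exp(-self.gate_logits))  # per-channel gate
        return g * f, (1.0 - g) * f                   # content, style

def domain_shifter(f, Wq, Wk, Wv):
    """Lightweight single-head self-attention with a residual connection.

    The residual keeps the shifted features close to the originals, which
    is one way a small add-on module could preserve source-domain behavior.
    """
    q, k, v = f @ Wq, f @ Wk, f @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return f + attn @ v
```

The complementary gate makes the split lossless by construction (content and style sum back to the input feature), so disentangling cannot discard information; which part each channel lands in is what training would adjust.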
Funder
Zhejiang Provincial Natural Science Foundation of China
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)
References: 54 articles.
Cited by: 1 article.