Bidirectional-Feature-Learning-Based Adversarial Domain Adaptation with Generative Network
Published: 2023-10-29
Volume: 13
Issue: 21
Page: 11825
ISSN: 2076-3417
Container-title: Applied Sciences
Short-container-title: Applied Sciences
Language: en
Author:
Han Chansu 1, Choo Hyunseung 2, Jeong Jongpil 3
Affiliation:
1. Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon-si 16419, Republic of Korea
2. Department of AI System Engineering, Sungkyunkwan University, Suwon-si 16419, Republic of Korea
3. Department of Smart Factory Convergence, Sungkyunkwan University, Suwon-si 16419, Republic of Korea
Abstract
Domain adaptation is an active research area. Generative models generally perform well on training data from a specific domain, but their ability to generalize to other domains is often limited. A growing body of research therefore applies domain adaptation techniques to address the vulnerability of generative models to inputs from other domains. This paper focuses on generative models and representation learning. Generative models have attracted considerable attention for their ability to generate diverse types of data such as images, music, and text; studies based on generative adversarial networks (GANs) and autoencoder architectures have been especially prominent. In this paper, we address the domain adaptation problem by reconstructing real image data with an autoencoder. The reconstructed images, treated as a form of noisy data, serve as the model's input. Reconstructing data by extracting features and selectively transforming them to reduce inter-domain differences constitutes representation learning. Building on these trends, this paper proposes a novel methodology that combines bidirectional feature learning with generative networks to approach the domain adaptation problem. The method improves adaptability by closely modeling the real data distribution. Experimental results show that the proposed model outperforms the traditional DANN and ADDA baselines, demonstrating that combining bidirectional feature learning and generative networks is an effective solution for domain adaptation. We expect these results to inform future research and applications in the field.
We conducted experiments with representative generative models and domain adaptation techniques and found that the proposed approach improves robustness to both data and domain shift. We hope this work contributes to the development of domain-robust adaptive models.
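The reconstruction step described above can be sketched minimally. The following is an illustrative NumPy toy, not the authors' architecture: a tied-weight linear autoencoder is trained by gradient descent to reconstruct a batch of flattened "images", and its reconstructions (the noisy inputs the abstract refers to) improve as the reconstruction MSE falls. All names, shapes, and the linear tied-weight design are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" batch: 64 samples, each a 16-dimensional flattened feature vector.
X = rng.normal(size=(64, 16))

# Tied-weight linear autoencoder: encoder is W (16 -> 8), decoder is W.T (8 -> 16).
W = rng.normal(scale=0.1, size=(16, 8))

def reconstruct(X, W):
    Z = X @ W          # encode into the 8-dim latent space
    return Z @ W.T     # decode back to the input space

lr = 0.01
losses = []
for _ in range(200):
    E = reconstruct(X, W) - X                      # reconstruction error
    losses.append(float(np.mean(E ** 2)))          # mean squared error
    # Gradient of the MSE w.r.t. the tied weight matrix W.
    grad = (X.T @ E @ W + E.T @ X @ W) * (2.0 / X.size)
    W = W - lr * grad

print(f"reconstruction MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In the paper's pipeline, such reconstructions would then be fed to the adversarial domain-adaptation stage; here the sketch only shows that the autoencoder learns to reproduce its inputs.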
Funder:
SungKyunKwan University BK21 FOUR; Ministry of Education; National Research Foundation of Korea
Subject:
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
References (32 articles):
1. Ganin, Y., et al. (2016). Domain-adversarial training of neural networks. J. Mach. Learn. Res.
2. Ganin, Y., and Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. Proceedings of the 32nd International Conference on Machine Learning (ICML'15), Lille, France.
3. Wang, J., Chen, Y., Hao, S., Feng, W., and Shen, Z. (2017). Balanced Distribution Adaptation for Transfer Learning. Proceedings of the 2017 IEEE International Conference on Data Mining (ICDM), New Orleans, LA, USA.
4. Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., and Darrell, T. (2014). Deep Domain Confusion: Maximizing for Domain Invariance. arXiv.
5. Ma, S., Gao, S.-H., Gao, Y., and Zhang, B. (2021). End-to-End Label-Constraint Adaptation for Adversarial Domain Alignment. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA.
Cited by: 1 article.