Affiliation:
1. Ocean College, Zhejiang University, Zhoushan, China
2. Hainan Institute, Zhejiang University, Sanya, China
3. School of Mechanical Engineering, Zhejiang University, Hangzhou, China
4. College of Electrical Engineering, Zhejiang University, Hangzhou, China
Abstract
The discrepancy between training and testing data distributions, together with the inductive bias of convolutional neural networks towards image style, degrades a model's generalization ability. Many unsupervised domain generalization methods based on feature decoupling neglect the explicit decoupling of content and style features, so the learned content features still contain considerable redundant information, limiting gains in generalization. To address this problem, this paper formulates the learning of domain-invariant (content) features as an information compression problem, minimizing redundancy in the content features. Furthermore, to strengthen decoupled learning, the paper introduces cross-domain loss functions and an image reconstruction module that explicitly decouple and recombine content and style across different domains. Extensive experiments demonstrate significant improvements over recent state-of-the-art approaches.
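The cross-domain decouple-and-recombine idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model: it assumes an AdaIN-style split in which channel-wise mean and standard deviation of a feature map act as "style" and the normalized residual acts as "content", then swaps styles between two domains and checks that restoring the original style reconstructs the original features.

```python
import numpy as np

def decouple(f):
    # Style = channel-wise mean/std; content = normalized features (AdaIN-style).
    mu = f.mean(axis=(1, 2), keepdims=True)
    sigma = f.std(axis=(1, 2), keepdims=True) + 1e-6
    return (f - mu) / sigma, (mu, sigma)

def merge(content, style):
    # Re-apply a style (mean/std) to a content representation.
    mu, sigma = style
    return content * sigma + mu

rng = np.random.default_rng(0)
f_a = rng.normal(0.0, 1.0, (3, 8, 8))  # stand-in features from "domain A"
f_b = rng.normal(2.0, 0.5, (3, 8, 8))  # stand-in features from "domain B"

c_a, s_a = decouple(f_a)
c_b, s_b = decouple(f_b)

# Cross-domain recombination: A's content with B's style, and vice versa.
f_ab = merge(c_a, s_b)
f_ba = merge(c_b, s_a)

# Cycle reconstruction: decoupling the mixed features and restoring the
# original style should recover the original features; the gap between
# recon_a and f_a would serve as a reconstruction loss during training.
c_ab, _ = decouple(f_ab)
recon_a = merge(c_ab, s_a)
recon_error = float(np.abs(recon_a - f_a).mean())
```

In a full model the decoupling would be learned by encoders and penalized with the cross-domain losses the paper proposes; here the swap is purely statistical, which is enough to show why a low reconstruction error indicates a clean content/style separation.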
Funder
National Natural Science Foundation of China
Publisher
Institution of Engineering and Technology (IET)