Affiliation:
1. School of Software, East China Jiaotong University, Nanchang 330013, China
2. School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
Abstract
Material images vary with light intensity, viewing angle, shooting distance, and other imaging conditions. Feature learning has shown great potential for addressing this issue. However, the knowledge obtained through simple feature fusion is insufficient to fully represent material images. In this study, we aimed to exploit the diverse knowledge learned by a novel progressive feature fusion method to improve recognition performance. To obtain implicit cross-modal knowledge, we perform early feature fusion and capture the cluster canonical correlations among state-of-the-art (SOTA) heterogeneous squeeze-and-excitation network (SENet) features, yielding a set of more discriminative deep-level visual semantics (DVSs). We then perform gene selection-based middle feature fusion to thoroughly exploit the feature-shared knowledge among the generated DVSs. Finally, any general classifier can use this feature-shared knowledge to perform the final material recognition. Experimental results on two public datasets (Fabric and MattrSet) showed that our method outperformed other SOTA baseline methods in terms of accuracy and real-time efficiency. Even most traditional classifiers achieved satisfactory performance with our method, demonstrating its high practicality.
Funder
National Natural Science Foundation of China
Subject
Computer Science Applications, Software
Cited by
1 article.