Towards accurate knowledge transfer via target-awareness representation disentanglement
Published: 2023-12-12
Issue: 2
Volume: 113
Pages: 699-723
ISSN: 0885-6125
Container-title: Machine Learning
Language: en
Short-container-title: Mach Learn
Authors: Li Xingjian, Hu Di, Li Xuhong, Xiong Haoyi, Xu Chengzhong, Dou Dejing
Abstract
Fine-tuning deep neural networks pre-trained on large-scale datasets is one of the most practical transfer learning paradigms when only a limited quantity of training samples is available. To obtain better generalization, using the starting point as the reference (SPAR), either through weights or features, has been successfully applied to transfer learning as a regularizer. However, due to the domain discrepancy between the source and target tasks, such straightforward knowledge preservation carries an obvious risk of negative transfer. In this paper, we propose a novel transfer learning algorithm built on the idea of Target-awareness REpresentation Disentanglement ($$\textrm{TRED}$$), in which the knowledge relevant to the target task is disentangled from the original source model and used as a regularizer while fine-tuning the target model. Two alternative approaches, maximizing Maximum Mean Discrepancy (Max-MMD) and minimizing mutual information (Min-MI), are introduced to achieve the desired disentanglement. Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average. $$\textrm{TRED}$$ also outperforms related state-of-the-art transfer learning regularizers such as $$\mathrm{L^2\text{-}SP}$$, $$\textrm{AT}$$, $$\textrm{DELTA}$$, and $$\textrm{BSS}$$. Moreover, our solution is compatible with different choices of disentangling strategy. While the combination of Max-MMD and Min-MI typically achieves higher accuracy, using Max-MMD alone can be the preferred choice in applications with low resource budgets.
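The regularization recipe summarized above can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration in PyTorch, not the authors' implementation: mmd is a biased empirical estimate of squared Maximum Mean Discrepancy with a Gaussian kernel, and tred_style_loss combines an ordinary task loss with (i) a SPAR-style feature-preservation term that pulls the target model's features toward the disentangled, target-relevant part of the source representation, and (ii) a Max-MMD term that pushes the relevant and residual parts apart. The names f_target, f_relevant, f_residual and the weights alpha, beta are assumptions for illustration; the paper's actual objective and its Min-MI variant differ in detail.

    import torch

    def gaussian_kernel(x, y, sigma=1.0):
        # Pairwise Gaussian (RBF) kernel between the rows of x and y.
        dist_sq = torch.cdist(x, y) ** 2
        return torch.exp(-dist_sq / (2 * sigma ** 2))

    def mmd(x, y, sigma=1.0):
        # Biased empirical estimate of squared Maximum Mean Discrepancy
        # between the sample sets x and y (shape: [batch, feature_dim]).
        k_xx = gaussian_kernel(x, x, sigma).mean()
        k_yy = gaussian_kernel(y, y, sigma).mean()
        k_xy = gaussian_kernel(x, y, sigma).mean()
        return k_xx + k_yy - 2 * k_xy

    def tred_style_loss(task_loss, f_target, f_relevant, f_residual,
                        alpha=0.1, beta=0.1):
        # Hypothetical TRED-style objective (illustrative only):
        #   task_loss   -- e.g. cross-entropy on the target task
        #   f_target    -- features of the model being fine-tuned
        #   f_relevant  -- target-relevant part disentangled from the source model
        #   f_residual  -- the remaining, target-irrelevant part
        preserve = ((f_target - f_relevant) ** 2).mean()  # SPAR-style feature regularizer
        separate = -mmd(f_relevant, f_residual)           # maximize MMD = minimize its negative
        return task_loss + alpha * preserve + beta * separate

In this reading, Max-MMD makes the two disentangled parts statistically distinguishable using only kernel evaluations, whereas Min-MI would instead minimize an estimated mutual information between them (e.g. with a MINE-style neural estimator), which requires training an extra network; this is consistent with the abstract's note that Max-MMD alone suits low-resource budgets.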
Funder: Carnegie Mellon University
Publisher: Springer Science and Business Media LLC
Subject: Artificial Intelligence, Software