Affiliation:
1. Inner Mongolia Key Laboratory of Wireless Networking and Mobile Computing, Inner Mongolia University, Hohhot, China
2. School of Computer & Information Engineering, Henan University of Economics and Law, Zhengzhou, China
Abstract
Domain adaptation is a viable solution for deep learning with small data. However, domain adaptation models trained on data containing sensitive information may violate personal privacy. In this article, we propose a solution for privacy-preserving unsupervised domain adaptation, called DP-CUDA, based on differentially private gradient projection and the contradistinguisher. In contrast to the traditional domain adaptation pipeline, DP-CUDA first searches for domain-invariant features between the source and target domains and then transfers knowledge. Specifically, the model is trained in the source domain by supervised learning on labeled data. During training of the target model, feature learning is used to solve the classification task end-to-end directly on unlabeled data, and differentially private noise is injected into the gradients. We conducted extensive experiments on a variety of benchmark datasets, including MNIST, USPS, SVHN, VisDA-2017, Office-31, and Amazon Review, to demonstrate both the utility and the privacy-preserving properties of our method.
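The abstract's core privacy mechanism is noise injection into the gradients during target-model training. As a rough illustration only, the generic DP-SGD-style recipe (per-sample gradient clipping followed by calibrated Gaussian noise) can be sketched as below; this is not the paper's exact gradient-projection procedure, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def dp_noisy_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """DP-SGD-style noisy gradient (illustrative sketch, not DP-CUDA itself).

    per_sample_grads: array of shape (batch_size, dim), one gradient per example.
    Each gradient is clipped to L2 norm clip_norm, the clipped gradients are
    summed, Gaussian noise scaled to the clipping bound is added, and the
    result is averaged over the batch.
    """
    rng = np.random.default_rng(rng)
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    # Scale down any gradient whose norm exceeds the clipping bound.
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale
    summed = clipped.sum(axis=0)
    # Noise standard deviation is proportional to the sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)
```

With `noise_multiplier=0` the function reduces to the mean of the clipped gradients, which makes the clipping step easy to verify in isolation.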
Funder
Science and Technology Major Project of Inner Mongolia
Subject
Artificial Intelligence, Human-Computer Interaction, Theoretical Computer Science, Software