Affiliation:
1. Nanyang Technological University, Singapore 639798, Singapore;
2. University of Texas at Dallas, Richardson, Texas 75080.
Abstract
The rich data used to train learning models are increasingly distributed and private. It is important to perform learning tasks efficiently without compromising individual users' privacy, even in the presence of untrusted learning applications, and, furthermore, to understand how privacy-preservation mechanisms affect the learning process. To address this problem, we design a differentially private distributed algorithm based on the stochastic variance reduced gradient (SVRG) algorithm, which prevents the learning server from accessing or inferring private training data, with a theoretical guarantee. We quantify the impact of the adopted privacy-preservation measure on the learning process in terms of the convergence rate, showing that the noise added at each gradient update results in a bounded deviation from the optimum. To further evaluate the impact on the trained models, we compare the proposed algorithm with SVRG and stochastic gradient descent on logistic regression and neural networks. Experimental results on benchmark data sets show that the proposed algorithm has a minor impact on the accuracy of trained models under a moderate privacy budget.
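The abstract describes perturbing each SVRG gradient update with noise. A minimal sketch of that idea, for logistic regression, is given below; it is not the paper's algorithm. The step size eta, noise scale sigma, and loop counts are hypothetical placeholders, and in a full differentially private treatment sigma would be calibrated to a privacy budget (epsilon, delta) with per-example gradient clipping to bound sensitivity, both omitted here for brevity.

```python
# Sketch: SVRG with Gaussian noise added at each gradient update.
# All hyperparameters (eta, sigma, epochs, inner) are illustrative only.
import numpy as np

def logistic_grad(w, X, y):
    """Mean gradient of log(1 + exp(-y * (X @ w))), labels y in {-1, +1}."""
    z = y * (X @ w)
    coef = -y / (1.0 + np.exp(z))
    return X.T @ coef / len(y)

def noisy_svrg(X, y, eta=0.1, sigma=0.05, epochs=10, inner=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    inner = inner or n
    w_snap = np.zeros(d)
    for _ in range(epochs):
        mu = logistic_grad(w_snap, X, y)            # full gradient at the snapshot
        w = w_snap.copy()
        for _ in range(inner):
            i = rng.integers(n)
            g = (logistic_grad(w, X[i:i+1], y[i:i+1])
                 - logistic_grad(w_snap, X[i:i+1], y[i:i+1])
                 + mu)                              # variance-reduced gradient
            noise = rng.normal(0.0, sigma, size=d)  # perturbation at each update
            w -= eta * (g + noise)
        w_snap = w
    return w_snap
```

The per-update noise is what yields the bounded deviation from the optimum quantified in the paper's convergence analysis.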
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)
Cited by
8 articles.