Abstract
The Alternating Direction Method of Multipliers (ADMM) is a widely used optimization tool for machine learning in distributed environments. In this paper, we propose an ADMM-based differentially private learning algorithm (FDP-ADMM) for penalized quantile regression on distributed functional data. The FDP-ADMM algorithm combines functional principal component analysis, an approximate augmented Lagrangian function, the ADMM algorithm, and a privacy policy based on the Gaussian mechanism with time-varying variance, which enables it to resist adversarial attacks and prevent privacy leakage in distributed networks. It is a noise-resilient, convergent, and computationally efficient distributed learning algorithm, even under strong privacy protection. Theoretical analyses of the privacy and convergence guarantees are derived and reveal a privacy–utility trade-off: a weaker privacy guarantee yields better utility. Evaluations on simulated distributed functional datasets demonstrate the effectiveness of the FDP-ADMM algorithm even under a high privacy guarantee.
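To make the abstract's ingredients concrete, the Python sketch below combines ADMM for an l1-penalized quantile regression with Gaussian noise of time-varying variance injected into the coefficient update. It is a minimal, single-machine illustration under several assumptions, not the authors' FDP-ADMM algorithm: the functional covariates are assumed to have already been reduced to FPCA score vectors `X`, the beta-update uses an inexact least-squares-plus-soft-thresholding surrogate for the lasso subproblem, and the noise scale `sigma0 * decay**t` is not calibrated to any sensitivity bound, so it does not by itself certify (epsilon, delta)-differential privacy. All names and parameters (`dp_admm_quantile`, `sigma0`, `decay`, `lam`, `rho`) are hypothetical.

```python
import numpy as np


def prox_check(v, tau, rho):
    """Proximal operator of the quantile check loss rho_tau(u) = u*(tau - I(u<0))
    with penalty parameter rho (a shifted soft-thresholding step)."""
    return v - np.clip(v, (tau - 1.0) / rho, tau / rho)


def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)


def dp_admm_quantile(X, y, tau=0.5, lam=0.1, rho=1.0, n_iter=200,
                     sigma0=1.0, decay=0.97, rng=None):
    """Noisy ADMM sketch for l1-penalized quantile regression.

    Splitting: minimize sum_i rho_tau(z_i) + lam*||beta||_1  s.t.  X beta + z = y.
    At iteration t the beta-update is perturbed by Gaussian noise with standard
    deviation sigma0 * decay**t (time-varying variance); without sensitivity
    calibration this only mimics, and does not certify, a Gaussian mechanism.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    beta = np.zeros(p)
    z = np.zeros(n)                      # splitting variable, z ~ y - X @ beta
    u = np.zeros(n)                      # (sign-flipped) scaled dual variable
    A = X.T @ X + 1e-8 * np.eye(p)       # reused in the least-squares step
    for t in range(n_iter):
        # beta-update: least-squares fit to the current residual target,
        # then lasso shrinkage (an inexact surrogate for the exact subproblem),
        # then Gaussian perturbation with decaying scale.
        v = y - z + u
        beta_ls = np.linalg.solve(A, X.T @ v)
        beta = soft_threshold(beta_ls, lam / rho)
        beta += rng.normal(scale=sigma0 * decay**t, size=p)
        # z-update: proximal step of the check loss.
        z = prox_check(y - X @ beta + u, tau, rho)
        # dual update.
        u = u + (y - X @ beta - z)
    return beta


# Example usage on synthetic "FPCA-score" data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
beta_true = np.concatenate([np.array([2.0, -1.5, 1.0]), np.zeros(7)])
y = X @ beta_true + rng.standard_t(df=3, size=500)
beta_hat = dp_admm_quantile(X, y, tau=0.5, lam=0.1, rho=1.0, rng=1)
```

In the distributed setting described by the abstract, each node would run such local updates on its own data and exchange only the noised coefficient vectors, with a consensus constraint handled by additional ADMM dual variables; that consensus layer is omitted here for brevity.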
Funder
Chinese National Social Science Fund
National Natural Science Foundation of China
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)
Cited by
1 article.
1. Distributed Quantile Regression with Non-Convex Sparse Penalties;2023 IEEE Statistical Signal Processing Workshop (SSP);2023-07-02