Affiliation:
1. University of California, Berkeley, CA
Abstract
We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner. In this local privacy framework, we establish sharp upper and lower bounds on the convergence rates of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy provided and the utility, as measured by convergence rate, of any statistical estimator or learning procedure.
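The abstract's local privacy model can be illustrated with the classical randomized-response mechanism, in which each data holder perturbs a single bit before releasing it, so the learner never sees the raw data. The sketch below is not taken from the paper itself; it is a minimal, standard example of an epsilon-locally-private release with a debiased mean estimate, and the function names are illustrative.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Release a privatized copy of one bit under epsilon-local privacy:
    report truthfully with probability e^eps / (1 + e^eps), else flip."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true mean of the underlying bits.

    E[report] = p*mu + (1 - p)*(1 - mu), so mu = (m - (1 - p)) / (2p - 1),
    where m is the empirical mean of the privatized reports."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    m = sum(reports) / len(reports)
    return (m - (1.0 - p)) / (2.0 * p - 1.0)

# Example: estimate the fraction of 1s without observing any raw bit.
random.seed(0)
bits = [1] * 700 + [0] * 300          # true mean = 0.7
reports = [randomized_response(b, epsilon=1.0) for b in bits]
estimate = debiased_mean(reports, epsilon=1.0)
```

Smaller epsilon flips bits more often, which strengthens privacy but inflates the variance of the estimate; this is the privacy-utility tradeoff the paper quantifies through minimax convergence rates.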
Funder
U.S. Army Research Laboratory
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Hardware and Architecture, Information Systems, Control and Systems Engineering, Software
Cited by
103 articles