Affiliation:
1. School of Computing, Macquarie University, Australia
2. Inria and Institut Polytechnique de Paris, France
3. Data61, CSIRO, Australia
Abstract
We study the privacy-utility trade-off in the context of metric differential privacy. Ghosh et al. introduced the idea of universal optimality to characterise the “best” mechanism for a given query: one that satisfies (a fixed) ε-differential privacy constraint while providing better utility than any other ε-differentially private mechanism for the same query. They showed that the Geometric mechanism is universally optimal for the class of counting queries. On the other hand, Brenner and Nissim showed that outside the space of counting queries, and for the Bayes risk loss function, no such universally optimal mechanisms exist. Apart from the universal optimality of the Laplace mechanism, these universal optimality results have not been generalised to other classes of differentially private mechanisms. In this paper, we use metric differential privacy and quantitative information flow as the fundamental principle for studying universal optimality. Metric differential privacy is a generalisation of both standard (i.e., central) differential privacy and local differential privacy, and it is increasingly used in application domains such as location privacy and privacy-preserving machine learning. Similar to the approaches adopted by Ghosh et al. and Brenner and Nissim, we measure utility in terms of loss functions, and we interpret a privacy mechanism as an information-theoretic channel satisfying constraints defined by ε-differential privacy and a metric meaningful to the underlying state space. Using this framework we clarify Brenner and Nissim’s negative results by (a) showing that in fact all privacy types contain optimal mechanisms relative to certain kinds of non-trivial loss functions, and (b) extending and generalising their negative results beyond Bayes risk to a wide class of non-trivial loss functions.
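The Geometric mechanism referred to above can be viewed, as the abstract describes, as an information-theoretic channel. The following is a minimal illustrative sketch (not the paper's implementation) of the truncated ε-geometric mechanism on counting-query outputs {0, …, n}, built by folding the tails of a two-sided geometric distribution into the boundary outputs; all names are our own:

```python
import math

def truncated_geometric(eps, n):
    """Channel matrix C of the truncated eps-geometric mechanism on {0..n}.

    C[i][j] = Pr(report j | true count i).  Interior outputs get mass
    proportional to alpha**|i-j| with alpha = e^{-eps}; the infinite
    tails of the two-sided geometric are folded onto outputs 0 and n.
    """
    a = math.exp(-eps)
    C = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if j == 0:
                C[i][j] = a**i / (1 + a)          # folded lower tail
            elif j == n:
                C[i][j] = a**(n - i) / (1 + a)    # folded upper tail
            else:
                C[i][j] = (1 - a) / (1 + a) * a**abs(i - j)
    return C
```

One can check directly that each row sums to 1 and that adjacent rows (counts differing by 1, i.e., at metric distance 1) satisfy the ε-DP ratio bound C[i][j] ≤ e^ε · C[i+1][j].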
Our exploration suggests that universally optimal mechanisms are indeed rare within privacy types. We therefore propose weaker universal benchmarks of utility called privacy type capacities. We show that such capacities always exist and can be computed using a convex optimisation algorithm. Further, we illustrate these ideas on a selection of examples with several different underlying metrics.
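As a point of comparison for the capacity-style benchmarks mentioned above: in standard quantitative information flow, the multiplicative Bayes capacity of a channel (the worst-case multiplicative leakage over all priors) has a simple closed form, the sum of the column maxima, whereas the privacy type capacities proposed here require convex optimisation. A minimal sketch of the standard QIF quantity, with illustrative names:

```python
def bayes_capacity(C):
    """Multiplicative Bayes capacity of a channel matrix C (rows = secrets,
    columns = observations): the sum over columns of the column maximum.
    It equals 1 exactly when the channel leaks nothing (all rows identical).
    """
    return sum(max(col) for col in zip(*C))
```

For instance, a noiseless 2x2 identity channel has capacity 2 (one bit of min-entropy leakage), while a channel whose rows are all identical has capacity 1.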
Subject
Computer Networks and Communications, Hardware and Architecture, Safety, Risk, Reliability and Quality, Software