Authors:
Aleksei Triastcyn, Boi Faltings
Abstract
We consider the problem of enhancing user privacy in common data analysis and machine learning development tasks, such as data annotation and inspection, by substituting the real data with samples from a generative adversarial network. We propose employing Bayesian differential privacy as the means to achieve a rigorous theoretical guarantee while providing a better privacy-utility trade-off. We demonstrate experimentally that our approach produces higher-fidelity samples than prior work, making it possible to (1) detect more subtle data errors and biases, and (2) reduce the need for labelling real data by achieving high accuracy when training directly on artificial samples.
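As a rough illustration of the mechanism the abstract alludes to, the sketch below shows the standard way a GAN discriminator is privatised during training: each example's gradient is clipped to a fixed norm and Gaussian noise is added before the update, with the per-step privacy cost then tracked by an accountant (here, Bayesian differential privacy in the paper's case). This is not the authors' implementation; the network architecture, clip norm C, noise multiplier SIGMA, and the helper private_disc_step are illustrative assumptions.

```python
# A minimal DP-SGD-style sketch (illustrative, not the paper's code):
# train a GAN discriminator with per-sample gradient clipping and
# Gaussian noise so that no single training record dominates an update.
import torch
import torch.nn as nn

C, SIGMA, LR = 1.0, 1.1, 1e-4  # assumed clip norm, noise multiplier, learning rate

# Toy discriminator for flattened 28x28 images (assumed shapes).
disc = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.SGD(disc.parameters(), lr=LR)
loss_fn = nn.BCEWithLogitsLoss()

def private_disc_step(real_batch, fake_batch):
    """One discriminator update with clipped, noised per-sample gradients."""
    opt.zero_grad()
    accum = [torch.zeros_like(p) for p in disc.parameters()]
    # Process examples one at a time so each gradient can be clipped
    # to norm C before aggregation (the core DP-SGD recipe).
    for x_real, x_fake in zip(real_batch, fake_batch):
        loss = (loss_fn(disc(x_real), torch.ones(1)) +
                loss_fn(disc(x_fake), torch.zeros(1)))
        sample_grads = torch.autograd.grad(loss, disc.parameters())
        norm = torch.sqrt(sum(g.pow(2).sum() for g in sample_grads))
        scale = torch.clamp(C / (norm + 1e-12), max=1.0)
        for acc, g in zip(accum, sample_grads):
            acc += g * scale
    n = len(real_batch)
    for p, g in zip(disc.parameters(), accum):
        # Gaussian noise calibrated to the clip norm makes the update
        # private; an accountant converts SIGMA into a privacy guarantee.
        p.grad = (g + SIGMA * C * torch.randn_like(g)) / n
    opt.step()
```

Only the discriminator touches real data, so privatising its updates suffices; the generator, trained purely against the discriminator, can then emit samples safe for annotation and inspection.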
Subject
Computational Mathematics, Computational Theory and Mathematics, Numerical Analysis, Theoretical Computer Science
Cited by: 2 articles.