Affiliation:
1. Department of Computer Science, Purdue University, USA.
Abstract
There is increasing awareness of the need to protect individual privacy in the data used to train machine learning models. Differential privacy provides a strong, formal guarantee of individual privacy. Naïve Bayes is a popular machine learning algorithm, often used as a baseline for many tasks. In this work, we present a differentially private Naïve Bayes classifier that adds noise proportional to the smooth sensitivity of its parameters, and compare it with the approach of Vaidya, Shafiq, Basu, and Hong [1], which scales noise to the global sensitivity of the parameters. Experimental results on real-world datasets show that smooth sensitivity significantly improves accuracy while still guaranteeing ε-differential privacy.
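The core mechanism the abstract describes, calibrating additive noise to a sensitivity bound divided by the privacy budget ε, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names are hypothetical, and the `sensitivity` argument stands in for whichever bound is used (the global sensitivity of [1], or a smooth-sensitivity bound, which is typically much smaller and hence yields less noise):

```python
import math
import random


def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def privatize(value: float, sensitivity: float, epsilon: float,
              rng: random.Random) -> float:
    """Laplace mechanism: perturb a statistic (e.g. a Naive Bayes class
    count or conditional count) with noise of scale sensitivity / epsilon."""
    return value + laplace_noise(sensitivity / epsilon, rng)
```

For counting queries the global sensitivity is 1, so `privatize(count, 1.0, epsilon, rng)` reproduces the noise scale of the global-sensitivity baseline; a smooth-sensitivity variant would pass a data-dependent bound instead (and, in general, may require a heavier-tailed noise distribution to retain a pure ε-guarantee).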
References (23 articles)
[1] J. Vaidya, B. Shafiq, A. Basu, and Y. Hong, “Differentially private naive bayes classification,” in 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), vol. 1, pp. 571–576, IEEE, 2013.
[2] C. Dwork, F. McSherry, K. Nissim, and A. Smith, “Calibrating noise to sensitivity in private data analysis,” in Theory of Cryptography Conference, pp. 265–284, Springer, 2006.
[3] F. McSherry and K. Talwar, “Mechanism design via differential privacy,” in FOCS, vol. 7, pp. 94–103, 2007.
[4] G. Jagannathan, K. Pillaipakkamnatt, and R. N. Wright, “A practical differentially private random decision tree classifier,” in 2009 IEEE International Conference on Data Mining Workshops, pp. 114–121, IEEE, 2009.
[5] B. I. Rubinstein, P. L. Bartlett, L. Huang, and N. Taft, “Learning in a large function space: Privacy-preserving mechanisms for SVM learning,” arXiv preprint arXiv:0911.5708, 2009.
Cited by 6 articles.