[1] Chaabane Abdelberi, Gergely Ács, and Mohamed Ali Kâafar. You are what you like! Information leakage through users' interests. In Proceedings of the 19th Annual Network and Distributed System Security Symposium (NDSS), 2012.
[2] Scott Alfeld, Xiaojin Zhu, and Paul Barford. Data poisoning attacks against autoregressive models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), pages 1452–1458, 2016.
[3] Battista Biggio, Giorgio Fumera, and Fabio Roli. Security evaluation of pattern classifiers under attack. IEEE Transactions on Knowledge and Data Engineering, 26(4):984–996, 2014.
[4] Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
[5] Vincent Bindschaedler, Reza Shokri, and Carl A. Gunter. Plausible deniability for privacy-preserving data synthesis. Proceedings of the VLDB Endowment, 10(5):481–492, 2017.