1. Brown, T., Mann, B., Ryder, N., et al.: Language models are few-shot learners. Adv. Neural. Inf. Process. Syst. 33, 1877–1901 (2020)
2. Tramèr, F., Zhang, F., Juels, A., et al.: Stealing machine learning models via prediction APIs. In: 25th USENIX Security Symposium (USENIX Security 2016), pp. 601–618 (2016)
3. Shen, S., Tople, S., Saxena, P.: Auror: defending against poisoning attacks in collaborative deep learning systems. In: Proceedings of the 32nd Annual Conference on Computer Security Applications, pp. 508–519 (2016)
4. Barreno, M., Nelson, B., Sears, R., et al.: Can machine learning be secure? In: Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pp. 16–25 (2006)
5. Gu, T., Dolan-Gavitt, B., Garg, S.: BadNets: identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733 (2017)