1. Mauro Barni, Kassem Kallas, and Benedetta Tondi. 2019. A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning. In Proceedings of the IEEE International Conference on Image Processing (ICIP).
2. Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, and Arjun Gupta. 2021. Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
3. Nicholas Carlini and David A. Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the IEEE Symposium on Security and Privacy (S&P). 39--57.
4. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning (ICML). 1597--1607.
5. Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E. Hinton. 2020b. Big self-supervised models are strong semi-supervised learners. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS). 22243--22255.