1. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019, June 2–7). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Minneapolis, MN, USA.
2. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv, Available online: http://arxiv.org/abs/2010.11929.
3. Li, A., Guo, J., Yang, H., Salim, F.D., and Chen, Y. (2021, January 18–21). DeepObfuscator: Obfuscating Intermediate Representations with Privacy-Preserving Adversarial Learning on Smartphones. Proceedings of the IoTDI’21: International Conference on Internet-of-Things Design and Implementation, Charlottesville, VA, USA.
4. Ribeiro, M., Grolinger, K., and Capretz, M.A. (2015, January 9–11). MLaaS: Machine Learning as a Service. Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA.
5. Achille, A., and Soatto, S. (2018). Emergence of Invariance and Disentanglement in Deep Representations. J. Mach. Learn. Res., 19, 1–34.