1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All you Need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems, 30, 5998–6008. https://doi.org/10.48550/ARXIV.1706.03762
2. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv, abs/1810.04805. https://doi.org/10.48550/ARXIV.1810.04805
3. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. (2018). GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 353–355. https://doi.org/10.18653/v1/W18-5446
4. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In 9th International Conference on Learning Representations (ICLR 2021). https://doi.org/10.48550/ARXIV.2010.11929
5. Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., & Dosovitskiy, A. (2021). Do Vision Transformers See Like Convolutional Neural Networks? CoRR, abs/2108.08810.