1. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S., 2020. End-to-end object detection with transformers. In: ECCV 2020 - 16th European Conference on Computer Vision, Proceedings.
2. Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A.L., Zhou, Y., 2021. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306.
3. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R., 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In: ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference. ISBN: 9781950737482, pp. 2978–2988.
4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K., 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In: NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, vol. 1. ISBN: 9781950737130, pp. 4171–4186.
5. Dong, 2018. Person re-identification by enhanced local maximal occurrence representation and generalized similarity metric learning. Neurocomputing.