1. Fetterman, A., Albrecht, J.: Understanding self-supervised and contrastive learning with Bootstrap Your Own Latent (BYOL) (2020). https://untitled-ai.github.io/understanding-self-supervised-contrastive-learning.html
2. Arora, S., Khandeparkar, H., Khodak, M., Plevrakis, O., Saunshi, N.: A theoretical analysis of contrastive unsupervised representation learning. arXiv preprint arXiv:1902.09229 (2019)
3. Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: Proceedings of the 37th International Conference on Machine Learning, pp. 1597–1607 (2020)
4. Chi, Z., et al.: InfoXLM: an information-theoretic framework for cross-lingual language model pre-training. arXiv preprint arXiv:2007.07834 (2020)
5. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT, pp. 4171–4186, Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1423