1. Vig, Jesse and Madani, Ali and Varshney, Lav R and Xiong, Caiming and Socher, Richard and Rajani, Nazneen Fatema. BERTology meets biology: interpreting attention in protein language models. arXiv preprint, 2020.
2. Rao, Roshan and Meier, Joshua and Sercu, Tom and Ovchinnikov, Sergey and Rives, Alexander. Transformer protein language models are unsupervised structure learners. bioRxiv, 2020.
3. Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and others. ProtTrans: towards cracking the language of life's code through self-supervised deep learning and high performance computing. arXiv preprint, 2020.
4. Rives, Alexander and Meier, Joshua and Sercu, Tom and Goyal, Siddharth and Lin, Zeming and Liu, Jason and Guo, Demi and Ott, Myle and Zitnick, C Lawrence and Ma, Jerry and others. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proceedings of the National Academy of Sciences, vol. 118, no. 15, 2021.
5. Heinzinger, Michael and Elnaggar, Ahmed and Wang, Yu and Dallago, Christian and Nechaev, Dmitrii and Matthes, Florian and Rost, Burkhard. Modeling aspects of the language of life through transfer-learning protein sequences. BMC Bioinformatics, vol. 20, no. 1, 2019.