Abstract
The optimal residue identity at each position in a protein is determined by its structural, evolutionary, and functional context. We seek to learn the representation space of the optimal amino-acid residue in different structural contexts in proteins. Inspired by masked language modeling (MLM), our training aims to transduce learning of amino-acid labels from non-masked residues to masked residues in their structural environments, and from general contexts (e.g., a residue in a protein) to specific ones (e.g., a residue at the interface of a protein or antibody complex). Our results on native sequence recovery and forward folding with AlphaFold2 suggest that the amino-acid label for a protein residue may be determined from its structural context alone, i.e., without knowledge of the sequence labels of surrounding residues. We further find that the sequence space sampled from our masked models recapitulates the evolutionary sequence neighborhood of the wildtype sequence. Remarkably, sequences conditioned on highly plastic structures recapitulate the conformational flexibility encoded in those structures. Furthermore, maximum-likelihood interfaces designed with masked models recapitulate wildtype binding energies across a wide range of protein interfaces and binding strengths. We also propose and compare fine-tuning strategies for training models that design the CDR loops of antibodies in the structural context of the antibody-antigen interface, leveraging structural databases of proteins, antibodies (synthetic and experimental), and protein-protein complexes. We show that pretraining on more general contexts improves native sequence recovery for antibody CDR loops, especially for the hypervariable CDR H3, while fine-tuning helps preserve patterns observed in these specialized contexts.
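The masked-residue training objective described above lends itself to a compact illustration. The sketch below is a minimal, hypothetical PyTorch rendering of the idea, not the paper's actual model: the architecture, the structural feature vector, the masking rate, and all names (MaskedResidueDesigner, struct_dim, mlm_loss) are assumptions chosen for illustration. A random subset of residue labels is replaced with a [MASK] token, the model sees every residue's structural features plus the unmasked labels, and the loss is evaluated only at masked positions.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_AA = 20            # size of the amino-acid alphabet
MASK_TOKEN = NUM_AA    # extra vocabulary index reserved for [MASK]
MASK_FRACTION = 0.15   # fraction of labels hidden per example (assumed)

class MaskedResidueDesigner(nn.Module):
    # Toy stand-in for a structure-conditioned masked residue model: each
    # residue is embedded from (i) its possibly-masked amino-acid label and
    # (ii) a fixed-size vector of structural features (e.g., backbone
    # geometry); a transformer encoder mixes context and a linear head
    # predicts the 20-way residue identity at every position.
    def __init__(self, struct_dim=16, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.aa_embed = nn.Embedding(NUM_AA + 1, d_model)  # +1 for [MASK]
        self.struct_proj = nn.Linear(struct_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, NUM_AA)

    def forward(self, aa_tokens, struct_feats):
        h = self.aa_embed(aa_tokens) + self.struct_proj(struct_feats)
        return self.head(self.encoder(h))  # (batch, length, NUM_AA) logits

def mlm_loss(model, aa_labels, struct_feats):
    # Hide a random subset of residue labels and score predictions only
    # at the masked positions, as in BERT-style masked language modeling.
    mask = torch.rand(aa_labels.shape) < MASK_FRACTION
    tokens = aa_labels.masked_fill(mask, MASK_TOKEN)
    logits = model(tokens, struct_feats)
    return F.cross_entropy(logits[mask], aa_labels[mask])

# Smoke test on random data: a batch of 2 "proteins", 50 residues each.
model = MaskedResidueDesigner()
labels = torch.randint(0, NUM_AA, (2, 50))
feats = torch.randn(2, 50, 16)
print(mlm_loss(model, labels, feats).item())

At design time one would instead mask exactly the positions to be designed (e.g., an interface or a CDR loop) and take the maximum-likelihood label, or sample from the predicted distribution, at each masked position, mirroring the interface-design and sequence-sampling experiments described in the abstract.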
Publisher
Cold Spring Harbor Laboratory
Cited by
3 articles.