Abstract
Protein engineers aim to discover and design novel sequences with targeted, desirable properties. Given the near limitless size of the protein sequence landscape, it is no surprise that these desirable sequences are often a relative rarity. This makes identifying such sequences a costly and time-consuming endeavor. In this work, we show how to use a deep Transformer Protein Language Model to identify sequences that have the most promise. Specifically, we use the model's self-attention map to calculate a PROMISE SCORE that weights the relative importance of a given sequence according to predicted interactions with a specified binding partner. This PROMISE SCORE can then be used to identify strong binders worthy of further study and experimentation. We use the PROMISE SCORE within two protein engineering contexts: Nanobody (Nb) discovery and protein optimization. With Nb discovery, we show how the PROMISE SCORE provides an effective way to select lead sequences from Nb repertoires. With protein optimization, we show how to use the PROMISE SCORE to select site-specific mutagenesis experiments that identify a high percentage of improved sequences. In both cases, we also show how the self-attention map used to calculate the PROMISE SCORE can indicate which regions of a protein are involved in intermolecular interactions that drive the targeted property. Finally, we describe how to fine-tune the Transformer Protein Language Model to learn a predictive model for the targeted property, and discuss the capabilities and limitations of fine-tuning with and without knowledge transfer within the context of protein engineering.
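The abstract does not give the exact formula for the PROMISE SCORE, but the idea of scoring a candidate by the self-attention flowing between it and its binding partner can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function `promise_score` and the choice of averaging inter-region attention are assumptions, and the `(L, L)` attention map would in practice come from a Transformer protein language model run on the concatenated Nb-partner sequence.

```python
import numpy as np

def promise_score(attn, nb_len):
    """Hypothetical promise score: the mean self-attention exchanged
    between the nanobody positions [0, nb_len) and the binding-partner
    positions [nb_len, L), given an (L, L) attention map `attn`
    (e.g. averaged over heads and layers of the language model)."""
    cross = attn[:nb_len, nb_len:]       # attention from Nb to partner
    cross_rev = attn[nb_len:, :nb_len]   # attention from partner to Nb
    return (float(cross.mean()) + float(cross_rev.mean())) / 2.0

# Toy example: a random, row-normalized attention map for a
# hypothetical 10-residue Nb concatenated with a 15-residue partner.
rng = np.random.default_rng(0)
L, nb_len = 25, 10
attn = rng.random((L, L))
attn /= attn.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax
score = promise_score(attn, nb_len)
```

Candidates from a repertoire would then be ranked by `score`, with the highest-scoring sequences selected as leads. Note that under uniform attention (every entry `1/L`), this score is exactly `1/L`, so values above that baseline indicate concentrated inter-chain attention.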
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.