ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Learning

Authors:

Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rehawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, Debsindhu Bhowmik, Burkhard Rost

Abstract

Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, Albert, Electra, T5) on data from UniRef and BFD containing up to 393 billion amino acids. The LMs were trained on the Summit supercomputer using 5616 GPUs and a TPU Pod with up to 1024 cores. Dimensionality reduction revealed that the raw protein LM-embeddings from unlabeled data captured some biophysical features of protein sequences. We validated the advantage of using the embeddings as exclusive input for several subsequent tasks. The first was per-residue prediction of protein secondary structure (3-state accuracy Q3 = 81%-87%); the second was per-protein prediction of sub-cellular localization (10-state accuracy Q10 = 81%) and of membrane vs. water-soluble proteins (2-state accuracy Q2 = 91%). For the per-residue predictions, the transfer of the most informative embeddings (ProtT5) for the first time outperformed the state-of-the-art without using evolutionary information, thereby bypassing expensive database searches. Taken together, the results imply that protein LMs learned some of the grammar of the language of life. To facilitate future work, we released our models at https://github.com/agemagician/ProtTrans.
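As a minimal sketch of the embedding-as-input workflow the abstract describes: the snippet below loads a released ProtTrans checkpoint via the Hugging Face transformers library and extracts per-residue embeddings (for secondary-structure-style tasks) plus a mean-pooled per-protein embedding (for localization-style tasks). The model identifier "Rostlab/prot_t5_xl_uniref50" and the mean-pooling step are illustrative assumptions, not the only configuration used in the paper.

```python
# Sketch: extracting embeddings from a released ProtTrans checkpoint.
# Assumes `pip install torch transformers sentencepiece`; the model id and
# pooling choice below are illustrative, not prescriptive.
import re
import torch
from transformers import T5Tokenizer, T5EncoderModel

model_id = "Rostlab/prot_t5_xl_uniref50"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(model_id, do_lower_case=False)
model = T5EncoderModel.from_pretrained(model_id).eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
# ProtT5 expects space-separated residues; map rare amino acids (U, Z, O, B) to X.
spaced = " ".join(re.sub(r"[UZOB]", "X", sequence))

batch = tokenizer(spaced, add_special_tokens=True, return_tensors="pt")
with torch.no_grad():
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"])

# Per-residue embeddings: keep the first L positions, dropping the trailing
# special token appended by the T5 tokenizer.
per_residue = out.last_hidden_state[0, : len(sequence)]  # shape (L, 1024)
# Per-protein embedding: mean-pool over residues.
per_protein = per_residue.mean(dim=0)                    # shape (1024,)
print(per_residue.shape, per_protein.shape)
```

In the paper, such per-residue embeddings feed lightweight supervised heads for secondary structure, while pooled per-protein embeddings drive the localization and membrane vs. water-soluble predictions; the pooling above is one simple realization of that idea.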

Publisher

Cold Spring Harbor Laboratory
