Abstract
Closing the gap between measurable genetic information and observable traits is a longstanding challenge in genomics. Yet, the prediction of molecular phenotypes from DNA sequence alone remains limited and inaccurate, often driven by the scarcity of annotated data and the difficulty of transferring learned knowledge between prediction tasks. Here, we present an extensive study of foundation models pre-trained on DNA sequences, named the Nucleotide Transformer, ranging from 50M up to 2.5B parameters and integrating information from 3,202 diverse human genomes, as well as 850 genomes selected across diverse phyla, including both model and non-model organisms. These transformer models yield transferable, context-specific representations of nucleotide sequences, which allow for accurate molecular phenotype prediction even in low-data settings. We show that the developed models can be fine-tuned at low cost, even in low-data regimes, to solve a variety of genomics applications. Despite no supervision, the transformer models learned to focus attention on key genomic elements, including those that regulate gene expression, such as enhancers. Lastly, we demonstrate that utilizing model representations can improve the prioritization of functional genetic variants. The training and application of foundation models in genomics explored in this study provide a widely applicable stepping stone toward accurate prediction of molecular phenotypes from DNA sequence. Code and weights are available at https://github.com/instadeepai/nucleotide-transformer in Jax and at https://huggingface.co/InstaDeepAI in PyTorch. Example notebooks to apply these models to any downstream task are available on HuggingFace.
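As a minimal sketch of how the released PyTorch weights on HuggingFace can be applied to a downstream task, the snippet below loads a checkpoint with the standard transformers API and mean-pools the final hidden layer into per-sequence embeddings. The checkpoint name, example sequences, and pooling choice are illustrative assumptions, not the authors' prescribed workflow; see the InstaDeepAI hub and the example notebooks for the released variants.

```python
# Minimal sketch (not the authors' prescribed pipeline): load a Nucleotide
# Transformer checkpoint from the InstaDeepAI HuggingFace hub and extract
# mean-pooled sequence embeddings for use in a downstream probe.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed checkpoint name; see https://huggingface.co/InstaDeepAI for the
# released variants (some may require trust_remote_code=True).
checkpoint = "InstaDeepAI/nucleotide-transformer-500m-human-ref"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)
model.eval()

# Toy DNA sequences, for illustration only.
sequences = ["ATTCCGATTCCGATTCCG", "ATTTCTCTCTCTCTCTGAGATCGATCGATCGAT"]
input_ids = tokenizer(sequences, return_tensors="pt", padding=True)["input_ids"]
attention_mask = (input_ids != tokenizer.pad_token_id).long()

with torch.no_grad():
    outputs = model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        output_hidden_states=True,
    )

# Mean-pool the last hidden layer over non-padding tokens: one embedding
# per input sequence.
hidden = outputs.hidden_states[-1]                  # (batch, tokens, dim)
mask = attention_mask.unsqueeze(-1).float()         # (batch, tokens, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                             # (2, embedding_dim)
```

In practice, such embeddings can feed a lightweight downstream classifier, or the full model can be fine-tuned end to end, in line with the low-cost fine-tuning described above.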
Publisher
Cold Spring Harbor Laboratory
References
60 articles.
Cited by
57 articles.