Author:
Amy X. Lu, Wilson Yan, Kevin K. Yang, Vladimir Gligorijevic, Kyunghyun Cho, Pieter Abbeel, Richard Bonneau, Nathan Frey
Abstract
Existing protein machine learning representations typically model either the sequence or structure distribution, with the other modality left implicit. The latent space of sequence-to-structure prediction models such as ESMFold represents the joint distribution of sequence and structure; however, we find these embeddings to exhibit massive activations, whereby some channels have values 3000× higher than others, regardless of the input. Further, under continuous compression schemes, ESMFold embeddings can be reduced by a factor of 128× along the channel dimension and 8× along the length dimension, while retaining structure information at <2Å accuracy and performing competitively on protein function and localization benchmarks. Under discrete compression schemes, we construct a tokenized all-atom structure vocabulary that retains high reconstruction accuracy, thus introducing a tokenized representation of all-atom structure that can be obtained from sequence alone. We term this series of embeddings CHEAP (Compressed Hourglass Embedding Adaptations of Proteins) embeddings, obtained via the HPCT (Hourglass Protein Compression Transformer) architecture. CHEAP is a compact representation of both protein structure and sequence, sheds light on information content asymmetries between sequence and structure, democratizes representations captured by large models, and is designed for flexible downstream applications such as generation, search, and prediction.
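The compression factors reported above can be illustrated at the shape level. The sketch below is a hedged illustration, not the paper's method: the embedding dimension of 1024, the random projection standing in for a learned channel bottleneck, and the mean-pooling standing in for hourglass length downsampling are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-residue embedding: 256 residues, channel dim 1024
# (dimensions chosen for illustration only).
L, C = 256, 1024
emb = rng.standard_normal((L, C))

# 128x channel compression: a learned bottleneck in the paper;
# a fixed random linear projection stands in for it here.
W = rng.standard_normal((C, C // 128))
narrow = emb @ W  # shape (256, 8)

# 8x length compression: hourglass-style downsampling, sketched as
# mean-pooling over non-overlapping windows of 8 residues.
pooled = narrow.reshape(L // 8, 8, C // 128).mean(axis=1)

print(pooled.shape)  # (32, 8): 256 values vs. the original 262,144
```

Together the two axes give a 1024× reduction in stored values (128 × 8), which is what makes the compressed embeddings cheap to store and search over.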
Publisher
Cold Spring Harbor Laboratory