Abstract
Protein language models (pLMs) transform their input into a sequence of hidden representations whose geometric behavior changes across layers. Looking at fundamental geometric properties such as the intrinsic dimension and the neighbor composition of these representations, we observe that these changes highlight a pattern characterized by three distinct phases. This phenomenon emerges across many models trained on diverse datasets, thus revealing a general computational strategy learned by pLMs to reconstruct missing parts of the data. These analyses show the existence of low-dimensional maps that encode evolutionary and biological properties such as remote homology and structural information. Our geometric approach sets the foundations for future systematic attempts to understand the space of protein sequences with representation learning techniques.
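As a companion to the abstract's mention of intrinsic dimension and neighbor composition, the sketch below illustrates how such quantities can be computed from per-layer representations, assuming one embedding vector per protein at each layer. The TwoNN estimator (Facco et al., 2017) and the k-nearest-neighbor overlap used here are common choices standing in for the paper's exact procedure; the array names, shapes, and synthetic data are purely illustrative assumptions.

# Hypothetical sketch: intrinsic dimension of per-layer pLM representations
# via the TwoNN estimator, and neighbor overlap between consecutive layers.
# Assumes embeddings are available as numpy arrays; the paper's exact
# pipeline may differ.
import numpy as np


def twonn_intrinsic_dimension(X: np.ndarray) -> float:
    """Maximum-likelihood TwoNN estimate of the intrinsic dimension of X (n_points, n_features)."""
    # Pairwise Euclidean distances; mask self-distances on the diagonal.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    # Distances to the first and second nearest neighbor of every point.
    part = np.partition(d, 1, axis=1)
    r1, r2 = part[:, 0], part[:, 1]
    mu = r2 / r1
    # MLE of the Pareto exponent: id = N / sum(log(mu_i)).
    return len(X) / np.sum(np.log(mu))


def neighbor_overlap(X_a: np.ndarray, X_b: np.ndarray, k: int = 10) -> float:
    """Mean fraction of shared k-nearest neighbors between two representations of the same points."""
    def knn(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :k]

    nn_a, nn_b = knn(X_a), knn(X_b)
    shared = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(shared))


if __name__ == "__main__":
    # Stand-in for per-layer embeddings (layers x proteins x hidden_dim);
    # in practice these would come from a pLM, one vector per sequence.
    rng = np.random.default_rng(0)
    layers = [rng.normal(size=(200, 64)) for _ in range(4)]

    for i, H in enumerate(layers):
        print(f"layer {i}: intrinsic dimension ~ {twonn_intrinsic_dimension(H):.1f}")
    for i in range(len(layers) - 1):
        print(f"layers {i}->{i+1}: neighbor overlap = {neighbor_overlap(layers[i], layers[i+1]):.2f}")

Tracking how the intrinsic dimension and the layer-to-layer neighbor overlap evolve with depth is one way to make the three-phase pattern described in the abstract visible.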