Abstract
In this computational study, we introduce “hint token learning,” a novel machine learning approach designed to enhance protein language modeling. This method addresses the unique challenge of protein mutational datasets, whose inputs are highly similar and may differ by only a single token. We demonstrate the advantage of hint token learning over traditional fine-tuning through three case studies. First, we developed a highly accurate free-energy-of-folding model using the largest protein stability dataset to date. Second, we applied hint token learning to predict a biophysical attribute: the brightness of green fluorescent protein mutants. Third, hint token learning was used to assess the impact of mutations on RecA bioactivity. Together, these diverse applications demonstrate the potential of hint token learning to improve protein language modeling across general and specific mutational datasets. To facilitate broader use, we have integrated our protein language models into the HuggingFace ecosystem for downstream mutational fine-tuning tasks.
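The abstract describes mutational datasets whose sequences differ by only a single token. One way to picture a "hint token" is as a special marker inserted at the mutated position so near-identical inputs become distinguishable to the model. The sketch below is purely illustrative; the function name, token symbol, and insertion scheme are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: mark mutated residues with a hint token so that
# sequences differing by a single residue yield distinct token streams.
# HINT_TOKEN and add_hint_tokens are hypothetical names, not the paper's code.

HINT_TOKEN = "<hint>"

def add_hint_tokens(wild_type: str, mutant: str) -> list[str]:
    """Tokenize per residue, inserting a hint token before each position
    where the mutant differs from the wild type."""
    assert len(wild_type) == len(mutant), "sequences must be aligned"
    tokens = []
    for wt_res, mut_res in zip(wild_type, mutant):
        if wt_res != mut_res:
            tokens.append(HINT_TOKEN)  # flag the single-residue change
        tokens.append(mut_res)
    return tokens

# A point mutation A->V at position 3 produces one hint token:
print(add_hint_tokens("MKAL", "MKVL"))
```

Under this assumed scheme, the output for the point mutant would be `['M', 'K', '<hint>', 'V', 'L']`, giving the model an explicit cue about where the two otherwise near-identical sequences diverge.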
Publisher
Cold Spring Harbor Laboratory