Abstract
Accurate prediction of compound-protein interactions (CPIs) is of great importance for drug discovery. Building generalizable deep learning (DL) models for CPI prediction requires expanding CPI data through experimental validation; however, the cost of these experiments is a bottleneck. Recently developed large language models (LLMs), such as chemical language models (CLMs) and protein language models (PLMs), have emerged as foundation models that achieve high generalization performance on a variety of tasks involving compounds and proteins. Inspired by this, we propose ChemGLaM, a chemical-genomics language model for predicting compound-protein interactions. ChemGLaM is built on two independently pre-trained language models, MoLFormer for compounds and ESM-2 for proteins, and is fine-tuned on CPI datasets using an interaction block with a cross-attention mechanism. ChemGLaM predicts interactions between unknown compounds and proteins with higher accuracy than existing CPI prediction models, demonstrating that combining independently pre-trained foundation models is effective for obtaining a sophisticated representation of compound-protein interactions. Furthermore, visualizing the learned cross-attention map offers explainable insights into the mechanism of compound-protein interaction. This study highlights the potential of integrating independent foundation models for multi-modal tasks such as CPI prediction.
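As a rough illustration of the architecture described above (not the authors' released implementation), an interaction block that fuses compound and protein embeddings via cross-attention could look like the sketch below. The embedding dimensions, head count, pooling strategy, and classifier head are all illustrative assumptions.

```python
# Minimal sketch of a cross-attention interaction block for CPI prediction.
# Dimensions and pooling are assumptions, not the paper's specification.
import torch
import torch.nn as nn

class InteractionBlock(nn.Module):
    def __init__(self, compound_dim=768, protein_dim=1280, hidden_dim=512, num_heads=8):
        super().__init__()
        # Project the two modalities into a shared hidden space.
        self.compound_proj = nn.Linear(compound_dim, hidden_dim)
        self.protein_proj = nn.Linear(protein_dim, hidden_dim)
        # Cross-attention: compound tokens (queries) attend to protein residues (keys/values).
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)
        # Binary head: interacts / does not interact.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, compound_emb, protein_emb):
        # compound_emb: (batch, n_compound_tokens, compound_dim) from a CLM such as MoLFormer
        # protein_emb:  (batch, n_residues, protein_dim) from a PLM such as ESM-2
        q = self.compound_proj(compound_emb)
        kv = self.protein_proj(protein_emb)
        attended, attn_weights = self.cross_attn(q, kv, kv)   # attn_weights can be visualized
        fused = self.norm(q + attended).mean(dim=1)           # pool over compound tokens
        return self.classifier(fused), attn_weights
```

In this sketch, the returned attention weights map compound tokens onto protein residues, which is the kind of cross-attention map the abstract suggests can be visualized for explainability.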