Abstract
T cell receptors (TCRs) on the surface of T cells recognize antigens, the critical event in the adaptive immune response to infection and vaccination. The ability to determine TCR-antigen recognition would benefit research in basic immunology and therapeutics. High-throughput experimental approaches for determining TCR-antigen specificity have produced valuable data, but the TCR-antigen pairing space is astronomically larger than what can be covered by experiments. Here, we describe a computational method for predicting TCR-antigen recognition, SABRE (Self-Attention-based Transformer Model for predicting T-cell Receptor-Epitope specificity). SABRE captures sequence properties of matching TCR and antigen pairs by self-supervised pre-training on known pairs from curated databases and large-scale experiments. It is then fine-tuned by supervised learning to predict TCRs that can recognize each antigen. We showed that SABRE's AUROC reaches 0.726 ± 0.008 for predicting TCR-epitope recognition. We carefully designed a training and testing scheme to evaluate the model's performance on unseen TCRs: 60% of the data was allocated for training, 20% for validation, and the remaining 20% exclusively for testing. Notably, this testing set was composed entirely of TCRs not present in the training phase, ensuring a genuine assessment of the model's ability to generalize to novel data.
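The unseen-TCR evaluation described above requires splitting at the level of TCR sequences rather than individual TCR-epitope pairs, so that no TCR appearing in the test set also appears in training. A minimal sketch of such a grouped 60/20/20 split is below; the data and function names are illustrative assumptions, not SABRE's actual implementation.

```python
import random

# Hypothetical (tcr, epitope) pairs; 50 distinct TCRs, each paired with several epitopes.
pairs = [(f"TCR{i % 50}", f"EP{i % 7}") for i in range(300)]

def split_by_tcr(pairs, train_frac=0.6, val_frac=0.2, seed=0):
    """Split pairs 60/20/20 by TCR, so every test-set TCR is unseen in training."""
    tcrs = sorted({t for t, _ in pairs})          # unique TCR sequences
    rng = random.Random(seed)
    rng.shuffle(tcrs)                             # randomize group assignment
    n = len(tcrs)
    train_tcrs = set(tcrs[: int(n * train_frac)])
    val_tcrs = set(tcrs[int(n * train_frac): int(n * (train_frac + val_frac))])
    splits = {"train": [], "val": [], "test": []}
    for t, e in pairs:
        if t in train_tcrs:
            splits["train"].append((t, e))
        elif t in val_tcrs:
            splits["val"].append((t, e))
        else:
            splits["test"].append((t, e))
    return splits

splits = split_by_tcr(pairs)
# No TCR leaks from training into the held-out test set.
assert {t for t, _ in splits["test"]}.isdisjoint({t for t, _ in splits["train"]})
```

Splitting by pair instead of by TCR would let the same receptor sequence appear on both sides of the split, inflating the measured AUROC; grouping by TCR is what makes the reported generalization estimate genuine.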
Publisher
Cold Spring Harbor Laboratory