Author:
Carl Edwards, Aakanksha Naik, Tushar Khot, Martin Burke, Heng Ji, Tom Hope
Abstract
Predicting synergistic drug combinations can help accelerate discovery of cancer treatments, particularly therapies personalized to a patient’s specific tumor via biopsied cells. In this paper, we propose a novel setting and models for in-context drug synergy learning. We are given a small “personalized dataset” of 10-20 drug synergy relationships in the context of specific cancer cell targets. Our goal is to predict additional drug synergy relationships in that context. Inspired by recent work that pre-trains a GPT language model (LM) to “in-context learn” common function classes, we devise novel pre-training schemes that enable a GPT model to in-context learn “drug synergy functions”. Our model—which does not use any textual corpora, molecular fingerprints, protein interactions, or any other domain-specific knowledge—is able to achieve competitive results. We further integrate our in-context approach with a genetic algorithm to optimize model prompts and select synergy candidates to test after conducting a patient biopsy. Finally, we explore a novel task of inverse drug design, which can potentially enable the design of drugs that synergize specifically to target a given patient’s “personalized dataset”. Our findings can potentially have an important impact on precision cancer medicine, and also raise intriguing questions about non-textual pre-training for LMs.
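As a rough illustration only, not the authors’ implementation: the in-context setup described above echoes the cited line of work on pre-training a GPT model over synthetic function classes. The PyTorch sketch below assumes a toy linear family standing in for “drug synergy functions”, a small encoder standing in for GPT, and hypothetical names and dimensions throughout; the model reads k labeled (drug-pair features, synergy score) examples plus one query and predicts the query’s score.

```python
# Minimal sketch (assumptions labeled): in-context learning of a synthetic
# "synergy function" family. Not the paper's model or data.
import torch
import torch.nn as nn

class InContextSynergyModel(nn.Module):
    def __init__(self, d_in=16, d_model=64, n_layers=4, n_heads=4):
        super().__init__()
        # Each token packs: drug-pair features, synergy score, query flag.
        self.embed = nn.Linear(d_in + 2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, xs, ys):
        # xs: (B, k+1, d_in) features; ys: (B, k+1) synergy scores.
        # The query occupies the last position; its score is masked to zero
        # and marked with an indicator flag so the model can identify it.
        ys_in = ys.clone()
        ys_in[:, -1] = 0.0
        flag = torch.zeros_like(ys_in)
        flag[:, -1] = 1.0
        tokens = torch.cat([xs, ys_in.unsqueeze(-1), flag.unsqueeze(-1)], dim=-1)
        h = self.backbone(self.embed(tokens))
        return self.head(h[:, -1, :]).squeeze(-1)  # predicted query score

def sample_task(batch=32, k=15, d_in=16):
    # Assumption: a random linear map per task plays the role of a
    # cell-line-specific synergy function during pre-training.
    w = torch.randn(batch, d_in, 1)
    xs = torch.randn(batch, k + 1, d_in)
    ys = (xs @ w).squeeze(-1)
    return xs, ys

model = InContextSynergyModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):  # toy pre-training loop over random synergy tasks
    xs, ys = sample_task()
    loss = nn.functional.mse_loss(model(xs, ys), ys[:, -1])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper itself, the prompts contain real drug/cell-line synergy tuples rather than synthetic linear data, and the genetic-algorithm component mentioned in the abstract searches over which context examples to include in the prompt; neither is reproduced here.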
Publisher
Cold Spring Harbor Laboratory
Cited by
3 articles.