CACER: Clinical concept Annotations for Cancer Events and Relations

Authors:

Fu Yujuan Velvin (1), Ramachandran Giridhar Kaushik (2, ORCID), Halwani Ahmad (3), McInnes Bridget T (4), Xia Fei (5), Lybarger Kevin (2, ORCID), Yetisgen Meliha (1), Uzuner Özlem (2, ORCID)

Affiliations:

1. Department of Biomedical Informatics & Medical Education, University of Washington, Seattle, WA 98195, United States

2. Department of Information Sciences and Technology, George Mason University, Fairfax, VA 22030, United States

3. Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, United States

4. Department of Computer Science, Virginia Commonwealth University, Richmond, VA 23284, United States

5. Department of Linguistics, University of Washington, Seattle, WA 98195, United States

Abstract

Objective: Clinical notes contain unstructured representations of patient histories, including the relationships between medical problems and prescription drugs. To investigate the relationship between cancer drugs and their associated symptom burden, we extract structured, semantic representations of medical problem and drug information from the clinical narratives of oncology notes.

Materials and Methods: We present Clinical concept Annotations for Cancer Events and Relations (CACER), a novel corpus with fine-grained annotations for over 48 000 medical problem and drug events and 10 000 drug-problem and problem-problem relations. Leveraging CACER, we develop and evaluate transformer-based information extraction models, including Bidirectional Encoder Representations from Transformers (BERT), Fine-tuned Language Net Text-To-Text Transfer Transformer (Flan-T5), Large Language Model Meta AI (Llama3), and Generative Pre-trained Transformer 4 (GPT-4), using fine-tuning and in-context learning (ICL).

Results: In event extraction, the fine-tuned BERT and Llama3 models achieved the highest performance at 88.2 and 88.0 F1, respectively, which is comparable to the inter-annotator agreement (IAA) of 88.4 F1. In relation extraction, the fine-tuned BERT, Flan-T5, and Llama3 models achieved the highest performance at 61.8-65.3 F1. GPT-4 with ICL achieved the lowest performance on both tasks.

Discussion: The fine-tuned models significantly outperformed GPT-4 with ICL, highlighting the importance of annotated training data and model optimization. Furthermore, the BERT models performed comparably to Llama3; for this task, large language models offer no performance advantage over the smaller BERT models.

Conclusions: We introduce CACER, a novel corpus with fine-grained annotations for medical problems, drugs, and their relationships in the clinical narratives of oncology notes. State-of-the-art transformer models achieved performance comparable to IAA for several extraction tasks.
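
The abstract reports span-level F1 scores for event extraction and compares them with inter-annotator agreement. As a rough illustration of how such a score can be computed, the sketch below scores predicted event spans against gold spans under an exact-match criterion. The (start, end, type) span representation and the exact-match rule are assumptions made for illustration; they are not the paper's official evaluation protocol.

# Minimal sketch of span-level precision/recall/F1 for event extraction,
# assuming events are reduced to (start, end, type) trigger spans scored
# with exact match -- an illustrative simplification, not CACER's
# evaluation script.
from typing import List, Tuple

Span = Tuple[int, int, str]  # (token start, token end, event type)

def span_f1(gold: List[Span], pred: List[Span]) -> Tuple[float, float, float]:
    """Return (precision, recall, F1) under exact span-and-type match."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)  # spans predicted exactly right
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    # Hypothetical gold and predicted spans for one note.
    gold = [(3, 5, "Drug"), (10, 12, "MedicalProblem")]
    pred = [(3, 5, "Drug"), (10, 13, "MedicalProblem")]
    print(span_f1(gold, pred))  # (0.5, 0.5, 0.5)

The same scaffold extends to relation extraction by scoring (head span, tail span, relation type) triples instead of single spans.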

Funder

National Institutes of Health

National Library of Medicine

Publisher

Oxford University Press (OUP)

