Evaluation of SURUS: a Named Entity Recognition System to Extract Knowledge from Interventional Study Records

Authors:

Casper Peeters, Koen Vijverberg, Marianne Pouwer, Bart Westerman, Maikel Boot, Suzan Verberne

Abstract

BACKGROUND: Medical decision-making is commonly guided by systematic analysis of peer-reviewed scientific literature, published as systematic literature reviews (SLRs). These analyses are cumbersome to conduct, as they require large amounts of time and subject-matter expertise. Automated extraction of key data points from clinical publications could speed up the assembly of systematic literature reviews. To this end, we built, trained and validated SURUS, a named entity recognition (NER) system comprising a Bidirectional Encoder Representations from Transformers (BERT) model trained on a highly granular dataset. The aim of this study was to assess how well SURUS classifies critical elements in clinical study abstracts, in particular the patient, intervention, comparator and outcome (PICO) elements and elements of study design.

DATASET & METHODS: The PubMedBERT-based model was trained and evaluated on a dataset of 400 interventional study abstracts, manually annotated by experts using 25 labels (39,531 annotations in total) according to a strict annotation guideline, with a Cohen's κ inter-annotator agreement of 0.81. We evaluated in-domain quality, and assessed out-of-domain quality by testing the system on abstracts from other disease areas and on observational study types. Finally, we tested the utility of SURUS by comparing its predictions to expert-assigned PICO and study design (PICOS) classifications.

RESULTS: The SURUS NER system achieved an overall F1 score of 0.95, with minor deviation between labels. In addition, SURUS achieved a NER F1 of 0.90 on abstracts from out-of-domain therapeutic areas and 0.84 on observational study abstracts. Finally, SURUS showed considerable utility when compared with expert-assigned PICOS classifications of interventional studies, with an F1 of 0.89 and a recall of 0.96.

CONCLUSION: To our knowledge, with an F1 score of 0.95, SURUS ranks among the best-performing models available to date for conducting exhaustive systematic literature analyses. A strict annotation guideline and high inter-annotator agreement resulted in high-quality in-domain medical entity recognition by a fine-tuned BERT-based model, and this quality was largely preserved under extensive out-of-domain evaluation, indicating the system's utility across other indication areas and study types. Current approaches in the field lack the granularity of training data and the versatility demonstrated by the SURUS approach, making the latter the preferred choice for automated extraction and classification tasks in the clinical literature domain. We think this approach sets a new standard in medical literature analysis and paves the way for creating highly granular datasets of labelled entities that can be used for downstream analysis outside of traditional SLRs.
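The abstract gives no implementation details, but the modelling recipe it describes (fine-tuning PubMedBERT for token classification and scoring with entity-level F1) follows a standard pattern. Below is a minimal sketch of that pattern using the Hugging Face transformers and seqeval libraries; the BIO label subset, hyperparameters, output path, and dataset wiring are illustrative assumptions, not the authors' published configuration.

```python
import numpy as np
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer)
from seqeval.metrics import f1_score  # entity-level F1, standard for NER

# Real Hugging Face model ID for PubMedBERT (abstracts + full text).
CHECKPOINT = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"

# Hypothetical subset of the paper's 25 labels, in BIO tagging format;
# the full label inventory is not listed in the abstract.
LABELS = ["O",
          "B-POPULATION", "I-POPULATION",
          "B-INTERVENTION", "I-INTERVENTION",
          "B-OUTCOME", "I-OUTCOME"]
id2label = dict(enumerate(LABELS))
label2id = {label: i for i, label in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForTokenClassification.from_pretrained(
    CHECKPOINT, num_labels=len(LABELS),
    id2label=id2label, label2id=label2id)

def compute_metrics(eval_pred):
    """Map logits to BIO tags and report entity-level F1 via seqeval."""
    logits, gold = eval_pred
    pred_ids = np.argmax(logits, axis=-1)
    pred_tags, gold_tags = [], []
    for p_row, g_row in zip(pred_ids, gold):
        # Positions labelled -100 (special tokens / subword pieces) are
        # excluded from scoring, per the usual token-classification setup.
        pred_tags.append([id2label[p] for p, g in zip(p_row, g_row) if g != -100])
        gold_tags.append([id2label[g] for p, g in zip(p_row, g_row) if g != -100])
    return {"f1": f1_score(gold_tags, pred_tags)}

args = TrainingArguments(
    output_dir="surus-ner",            # hypothetical output path
    learning_rate=2e-5,                # assumed defaults, not reported values
    num_train_epochs=3,
    per_device_train_batch_size=16)

# `train_ds` and `eval_ds` would hold the tokenized, BIO-labelled corpus
# (400 expert-annotated interventional abstracts in the paper); building
# them is omitted here, so the Trainer call is left commented out.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()
```

For the reported Cohen's κ inter-annotator agreement, the standard computation would be `sklearn.metrics.cohen_kappa_score` applied to the two annotators' aligned token labels; the abstract does not specify how the authors computed it.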

Publisher

Cold Spring Harbor Laboratory

