Medical text classification based on the discriminative pre-training model and prompt-tuning

Authors

Wang Yu¹, Wang Yuan², Peng Zhenwan¹, Zhang Feifan¹, Zhou Luyao¹, Yang Fei¹

Affiliations

1. School of Biomedical Engineering, Anhui Medical University, Hefei, China

2. Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China

Abstract

Medical text classification, a fundamental task in medical natural language processing, aims to identify the categories to which a short medical text belongs. Current research has focused on performing medical text classification with a pre-trained language model through fine-tuning. However, this paradigm introduces additional parameters when training the extra classifier. Recent studies have shown that the “prompt-tuning” paradigm yields better performance on many natural language processing tasks because it bridges the gap between pre-training objectives and downstream tasks. The main idea of prompt-tuning is to transform binary or multi-class classification tasks into mask-prediction tasks, fully exploiting the features learned by pre-trained language models. This study explores, for the first time, how to classify medical texts with a discriminative pre-trained language model, ERNIE-Health, through prompt-tuning. Specifically, we perform prompt-tuning based on the multi-token selection task, one of ERNIE-Health's pre-training tasks. The raw text is wrapped into a new sequence with a template in which the category label is replaced by a [UNK] token, and the model is then trained to calculate the probability distribution over the candidate categories. Our method is evaluated on the KUAKE-Question Intention Classification and CHIP-Clinical Trial Criterion datasets, achieving accuracy values of 0.866 and 0.861, respectively. In addition, the loss of our model decreases faster throughout training than that of fine-tuning. The experimental results provide valuable insights to the community and suggest that prompt-tuning is a promising approach for improving the performance of pre-trained models on domain-specific tasks.
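To make the template-wrapping mechanism concrete, below is a minimal sketch of the prompt-tuning inference step described in the abstract. The checkpoint name (nghuyong/ernie-health-zh), the Chinese template string, and the one-token verbalizer mapping are illustrative assumptions, not the paper's actual configuration; the paper's multi-token selection head is approximated here by scoring candidate label-token embeddings against the hidden state at the [UNK] position.

```python
# Sketch only: assumes an ERNIE-Health checkpoint is available on the
# Hugging Face hub and approximates the multi-token selection task with
# embedding dot products at the [UNK] slot.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "nghuyong/ernie-health-zh"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical one-token verbalizer mapping each category to a label token.
label_tokens = ["病", "治"]

def classify(text: str) -> int:
    # Wrap the raw text with a (hypothetical) template; the category
    # slot is filled with the [UNK] token, as described in the abstract.
    prompt = f"{text}。这是一个[UNK]类型的问题。"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Locate the [UNK] position in the tokenized sequence.
    unk_pos = (inputs["input_ids"][0] == tokenizer.unk_token_id).nonzero()[0].item()

    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0, unk_pos]  # (hidden_size,)

    # Score each candidate label token by the dot product between the
    # [UNK] hidden state and that token's input embedding, then softmax
    # to obtain a probability distribution over candidate categories.
    emb = model.get_input_embeddings().weight
    cand_ids = tokenizer.convert_tokens_to_ids(label_tokens)
    logits = hidden @ emb[cand_ids].T  # (num_labels,)
    probs = torch.softmax(logits, dim=-1)
    return int(probs.argmax())
```

In actual prompt-tuning, the same wrapped sequence would be used during training so that the classification objective matches the model's pre-training objective, which is the gap-bridging effect the abstract attributes to this paradigm.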

Funder

Initiation Fund of Anhui Medical University

Natural Science Foundation of Anhui Province of China

Publisher

SAGE Publications

Subject

Health Information Management, Computer Science Applications, Health Informatics, Health Policy

Cited by 7 articles.

1. Prompt Engineering Paradigms for Medical Applications: Scoping Review;Journal of Medical Internet Research;2024-09-10

2. Prompt Engineering Paradigms for Medical Applications: Scoping Review (Preprint);2024-05-14

3. A hybrid natural language processing model for short text classification using BCBLEM;2024 3rd International Conference on Artificial Intelligence For Internet of Things (AIIoT);2024-05-03

4. Large Language Models in Randomized Controlled Trials Design;2024-04-26

5. Hierarchical Text Classification of Chinese Public Security Cases Based on ERNIE 3.0 Model;2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL);2024-04-19
