Abstract
Background
In most cases, the abstracts of articles in the medical domain are publicly available. Although these are accessible to everyone, they are hard for a wider audience to comprehend because of their complex medical vocabulary. Simplifying these abstracts is therefore essential to make medical research accessible to the general public.
Objective
This study aims to develop a deep learning–based text simplification (TS) approach that converts complex medical text into a simpler version while maintaining the quality of the generated text.
Methods
A TS approach using reinforcement learning and transformer-based language models was developed. A relevance reward, a Flesch-Kincaid readability reward, and a lexical simplicity reward were jointly optimized to simplify jargon-dense medical paragraphs while preserving the quality of the text. The model was trained on 3568 complex-simple medical paragraph pairs and evaluated on 480 paragraphs using automated metrics and human annotation.
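The abstract does not specify how the three rewards are combined, so the following is a minimal sketch, not the authors' exact formulation. It assumes the textstat package for the Flesch-Kincaid grade, uses token overlap as a cheap proxy for the relevance reward (the paper likely uses learned embeddings), and uses average word length as a crude proxy for lexical simplicity; the weights and target grade level are illustrative.

```python
# Hypothetical reward combination for policy-gradient fine-tuning;
# proxies and weights are assumptions, not the authors' method.
import textstat


def flesch_kincaid_reward(text: str, target_grade: float = 8.0) -> float:
    """Higher reward the closer the output is to the target grade level."""
    grade = textstat.flesch_kincaid_grade(text)
    return 1.0 / (1.0 + abs(grade - target_grade))


def relevance_reward(source: str, generated: str) -> float:
    """Unigram-overlap proxy for relevance between source and output."""
    src, gen = set(source.lower().split()), set(generated.lower().split())
    return len(src & gen) / max(len(gen), 1)


def lexical_simplicity_reward(generated: str) -> float:
    """Treat shorter average word length as lexically simpler."""
    words = generated.split()
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    return 1.0 / avg_len


def total_reward(source: str, generated: str,
                 w_rel: float = 1.0, w_fk: float = 1.0, w_lex: float = 1.0) -> float:
    """Weighted sum used as the scalar reward signal during RL training."""
    return (w_rel * relevance_reward(source, generated)
            + w_fk * flesch_kincaid_reward(generated)
            + w_lex * lexical_simplicity_reward(generated))
```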
Results
The proposed method outperformed previous baselines on the Flesch-Kincaid score (11.84) and achieved performance comparable to other baselines on ROUGE-1 (0.39), ROUGE-2 (0.11), and SARI (0.40). Manual evaluation showed more than 70% agreement between human annotators on factors such as fluency, coherence, and adequacy.
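As a rough sketch of how the reported automatic metrics could be reproduced, the snippet below computes ROUGE-1/ROUGE-2, SARI, and the Flesch-Kincaid grade on a toy example; it assumes the rouge_score, Hugging Face evaluate, and textstat packages, since the abstract does not specify the authors' evaluation tooling.

```python
# Illustrative metric computation on a made-up example, not the paper's data.
import textstat
import evaluate
from rouge_score import rouge_scorer

source = "Myocardial infarction results from occlusion of a coronary artery."
prediction = "A heart attack happens when a heart blood vessel gets blocked."
reference = "A heart attack is caused by a blocked blood vessel in the heart."

# ROUGE-1 / ROUGE-2 F1 between the generated and reference simplifications.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)
rouge = scorer.score(reference, prediction)
print("ROUGE-1:", round(rouge["rouge1"].fmeasure, 2))
print("ROUGE-2:", round(rouge["rouge2"].fmeasure, 2))

# SARI compares the output against both the source and the reference(s).
sari = evaluate.load("sari")
print("SARI:", sari.compute(sources=[source],
                            predictions=[prediction],
                            references=[[reference]]))

# Flesch-Kincaid grade level of the generated text (lower = more readable).
print("FK grade:", textstat.flesch_kincaid_grade(prediction))
```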
Conclusions
A novel medical TS approach that leverages reinforcement learning was successfully developed; it accurately simplifies complex medical paragraphs and thereby increases their readability. The proposed approach can be applied to automatically generate simplified versions of complex medical text, enhancing the accessibility of biomedical research to a wider audience.
Subject
Health Information Management, Health Informatics