Author:
Kerstin Denecke, Richard May, Octavio Rivera-Romero
Abstract
Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT), both built on the transformer architecture, have significantly advanced artificial intelligence and natural language processing. Recognized for their ability to capture associative relationships between words based on shared context, these models are poised to transform healthcare by improving diagnostic accuracy, tailoring treatment plans, and predicting patient outcomes. However, their use in healthcare applications also carries multiple risks and potentially unintended consequences. This qualitative study, conducted with 28 participants, explores the benefits, shortcomings, and risks of using transformer models in healthcare. It analyses responses to seven open-ended questions using a simplified thematic analysis. Our research reveals seven benefits, including improved operational efficiency, optimized processes, and refined clinical documentation. Despite these benefits, there are significant concerns about the introduction of bias, auditability issues, and privacy risks. Challenges include the need for specialized expertise, the emergence of ethical dilemmas, and a potential reduction in the human element of patient care. For the medical profession, risks include the impact on employment, changes in the patient-doctor dynamic, and the need for extensive training in both system operation and data interpretation.
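The abstract's reference to capturing "associative relationships between words based on shared context" can be illustrated with masked-word prediction, the pretraining task behind BERT. The sketch below is not from the paper; it assumes the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint, and simply shows how such a model ranks candidate words for a masked position using the surrounding clinical context.

```python
# Minimal sketch (illustrative only, not part of the study): contextual word
# prediction with a BERT model via the Hugging Face "fill-mask" pipeline.
from transformers import pipeline

# Load a masked-language-model pipeline; the model predicts the [MASK] token
# from the words around it.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The surrounding clinical context steers the model toward plausible candidates.
predictions = fill_mask("The patient was prescribed [MASK] to lower blood pressure.")

for p in predictions:
    # Each result contains the candidate token and its probability score.
    print(f"{p['token_str']:>15}  score={p['score']:.3f}")
```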
Funder
Bern University of Applied Sciences
Publisher
Springer Science and Business Media LLC
Cited by
8 articles.