A Bibliometric Analysis of the Rise of ChatGPT in Medical Research

Authors:

Barrington, Nikki M. 1; Gupta, Nithin 2; Musmar, Basel 3; Doyle, David 4; Panico, Nicholas 5; Godbole, Nikhil 6; Reardon, Taylor 7; D’Amico, Randy S. 8

Affiliations:

1. Chicago Medical School, Rosalind Franklin University, North Chicago, IL 60064, USA

2. School of Osteopathic Medicine, Campbell University, Lillington, NC 27546, USA

3. Faculty of Medicine and Health Sciences, An-Najah National University, Nablus P.O. Box 7, West Bank, Palestine

4. Central Michigan University College of Medicine, Mount Pleasant, MI 48858, USA

5. Lake Erie College of Osteopathic Medicine, Erie, PA 16509, USA

6. School of Medicine, Tulane University, New Orleans, LA 70112, USA

7. Department of Neurology, Henry Ford Hospital, Detroit, MI 48202, USA

8. Department of Neurosurgery, Lenox Hill Hospital, New York, NY 10075, USA

Abstract

The rapid emergence of publicly accessible artificial intelligence platforms such as large language models (LLMs) has led to an equally rapid increase in articles exploring their potential benefits and risks. We performed a bibliometric analysis of ChatGPT literature in medicine and science to better understand publication trends and knowledge gaps. Following title, abstract, and keyword searches of the PubMed, Embase, Scopus, and Web of Science databases for ChatGPT articles published in the medical field, articles were screened against inclusion and exclusion criteria. Data were extracted from included articles, with citation counts obtained from PubMed and journal metrics obtained from Clarivate Journal Citation Reports. After screening, 267 articles were included in the study, most of which were editorials or correspondence, with an average of 7.5 ± 18.4 citations per publication. Published articles on ChatGPT were authored largely in the United States, India, and China. The topics discussed included the use and accuracy of ChatGPT in research, medical education, and patient counseling. Among non-surgical specialties, radiology published the most ChatGPT-related articles, while plastic surgery published the most among surgical specialties. The average citation count among the top 20 most-cited articles was 60.1 ± 35.3, and the journals with the most ChatGPT-related publications averaged 10 ± 3.7 publications each. Our results suggest that managing the inevitable ethical and safety issues that arise with the implementation of LLMs will require further research into the capabilities and accuracy of ChatGPT, in order to generate policies guiding the adoption of artificial intelligence in medicine and science.

Publisher

MDPI AG

Subject

General Medicine

