Affiliations:
1. Department of Otolaryngology—Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
2. Department of Architecture and Territory, Mediterranean University of Reggio Calabria, Calabria, Italy
3. Department of Landscape Architecture, International Credit Hours Engineering Programs of Ain Shams University, Cairo, Egypt
4. Arclivia, Bayonne, NJ, USA
5. Department of Otolaryngology—Head and Neck Surgery, University of Florida College of Medicine, Gainesville, Florida, USA
6. Department of Ophthalmology and Visual Science, Rutgers New Jersey Medical School, Newark, New Jersey, USA
7. Center for Skull Base and Pituitary Surgery, Neurological Institute of New Jersey, Rutgers New Jersey Medical School, Newark, New Jersey, USA
8. Department of Neurological Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
9. Department of Otolaryngology and Facial Plastic Surgery, Cooperman Barnabas Medical Center—RWJBarnabas Health, Livingston, New Jersey, USA
Abstract
Objectives: Artificial intelligence is evolving rapidly and significantly impacting health care, promising to transform access to medical information. With the rise of medical misinformation and frequent internet searches for health-related advice, there is a growing demand for reliable patient information. This study assesses the effectiveness of ChatGPT in providing information and treatment options for chronic rhinosinusitis (CRS).

Methods: Six inputs were entered into ChatGPT regarding the definition, prevalence, causes, symptoms, treatment options, and postoperative complications of CRS. The International Consensus Statement on Allergy and Rhinology: Rhinosinusitis guidelines served as the gold standard for evaluating the answers. The inputs were grouped into three categories, and Flesch–Kincaid readability analysis, ANOVA, and trend analysis were used to assess them.

Results: Although some discrepancies were found regarding CRS, ChatGPT's answers were largely in line with the existing literature. The mean Flesch Reading Ease score, Flesch–Kincaid Grade Level, and percentage of passive sentences were 40.7, 12.15, and 22.5% for the basic information and prevalence category; 47.5, 11.2, and 11.1% for the causes and symptoms category; 33.05, 13.05, and 22.25% for the treatment and complications category; and 40.42, 12.13, and 18.62% across all categories. ANOVA indicated no statistically significant differences in readability across the categories (p-values: Flesch Reading Ease = 0.385, Flesch–Kincaid Grade Level = 0.555, passive sentences = 0.601). Trend analysis revealed that readability varied slightly, with a general increase in complexity.

Conclusion: ChatGPT is a developing tool potentially useful for patients and medical professionals seeking medical information. However, caution is advised, as its answers may not be fully accurate compared with clinical guidelines or suitable for patients of varying educational backgrounds.

Level of evidence: 4.
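For readers unfamiliar with these metrics, the sketch below illustrates how the two readability formulas and the one-way ANOVA named above can be computed in Python. This is not the study's analysis code: the coefficients are the standard published Flesch constants, and the per-category score lists are hypothetical placeholders, since the abstract reports only category means.

```python
# Minimal sketch (assumed, not the authors' pipeline): readability formulas
# and a one-way ANOVA across three response categories.
from scipy.stats import f_oneway

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease score: higher values indicate easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example passage counts: 100 words, 6 sentences, 170 syllables.
print(f"FRE  = {flesch_reading_ease(100, 6, 170):.1f}")   # -> FRE  = 46.1
print(f"FKGL = {flesch_kincaid_grade(100, 6, 170):.1f}")  # -> FKGL = 11.0

# Hypothetical per-response Flesch Reading Ease scores per category
# (placeholders; the study's per-response values are not in the abstract).
basic_info = [38.9, 42.5]
causes_symptoms = [46.1, 48.9]
treatment_complications = [31.8, 34.3]

f_stat, p_value = f_oneway(basic_info, causes_symptoms, treatment_complications)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```

A grade level near 12, as reported across all categories, corresponds to reading material suited to a high-school senior, which is well above the sixth-grade level commonly recommended for patient-facing health materials.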