Assessing the Accuracy of Artificial Intelligence Models in Scoliosis Classification and Suggested Therapeutic Approaches
Published: 2024-07-09
Volume: 13
Issue: 14
Page: 4013
ISSN: 2077-0383
Container-title: Journal of Clinical Medicine
Short-container-title: JCM
Language: en
Authors:
Fabijan Artur 1 (ORCID), Zawadzka-Fabijan Agnieszka 2 (ORCID), Fabijan Robert 3, Zakrzewski Krzysztof 1, Nowosławska Emilia 1, Polis Bartosz 1
Affiliations:
1. Department of Neurosurgery, Polish Mother’s Memorial Hospital Research Institute, 93-338 Lodz, Poland
2. Department of Rehabilitation Medicine, Faculty of Health Sciences, Medical University of Lodz, 90-419 Lodz, Poland
3. Independent Researcher, Luton LU2 0GS, UK
Abstract
Background: Open-source artificial intelligence models (OSAIMs) are increasingly being applied in various fields, including IT and medicine, offering promising solutions for diagnostic and therapeutic interventions. In response to the growing interest in AI for clinical diagnostics, we evaluated several OSAIMs (ChatGPT 4, Microsoft Copilot, Gemini, PopAi, You Chat, Claude, and the specialized PMC-LLaMA 13B), assessing their ability to classify scoliosis severity and recommend treatments based on radiological descriptions of AP radiographs. Methods: Our study employed a two-stage methodology in which descriptions of single-curve scoliosis, first evaluated by two independent neurosurgeons, were then analyzed by the AI models. Statistical analysis involved the Shapiro–Wilk test for normality, with non-normal distributions described using medians and interquartile ranges. Inter-rater reliability was assessed using Fleiss’ kappa, and performance metrics such as accuracy, sensitivity, specificity, and F1 score were used to evaluate the AI systems’ classification performance. Results: The analysis indicated that although some AI systems, such as ChatGPT 4, Copilot, and PopAi, accurately reflected the recommended Cobb angle ranges for disease severity and treatment, others, such as Gemini and Claude, required further calibration. Notably, PMC-LLaMA 13B expanded the classification range for moderate scoliosis, potentially influencing clinical decisions and delaying interventions. Conclusions: These findings highlight the need for the continuous refinement of AI models to enhance their clinical applicability.
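The statistical workflow named in the Methods (Shapiro–Wilk normality testing, Fleiss’ kappa for inter-rater agreement, and accuracy, sensitivity, specificity, and F1 metrics) can be illustrated with a short sketch. The snippet below is not the authors’ code: the severity labels, Cobb-angle values, and rater counts are hypothetical, and it assumes the standard scipy, statsmodels, and scikit-learn implementations of these tests.

# Illustrative sketch only (hypothetical data, not the authors' code).
import numpy as np
from scipy.stats import shapiro
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical Cobb-angle measurements: Shapiro-Wilk checks normality.
cobb_angles = np.array([12.0, 28.5, 31.0, 47.2, 15.4, 52.8])
w_stat, p_value = shapiro(cobb_angles)
print(f"Shapiro-Wilk: W={w_stat:.3f}, p={p_value:.3f}")

# Hypothetical severity ratings from three raters (rows = cases, columns = raters);
# aggregate_raters converts them to per-case category counts for Fleiss' kappa.
ratings = np.array([
    ["mild",     "mild",     "mild"],
    ["moderate", "moderate", "severe"],
    ["severe",   "severe",   "severe"],
    ["moderate", "mild",     "moderate"],
])
counts, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(counts):.3f}")

# Hypothetical reference (neurosurgeon) labels vs. one AI system's output.
reference = np.array(["mild", "moderate", "moderate", "severe", "mild", "severe"])
ai_output = np.array(["mild", "moderate", "severe",   "severe", "mild", "moderate"])

labels = ["mild", "moderate", "severe"]
cm = confusion_matrix(reference, ai_output, labels=labels)
for i, label in enumerate(labels):
    tp = cm[i, i]                       # true positives for this class
    fn = cm[i, :].sum() - tp            # cases of this class missed by the model
    fp = cm[:, i].sum() - tp            # other classes predicted as this class
    tn = cm.sum() - tp - fn - fp
    print(f"{label}: sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")

print(f"Accuracy: {accuracy_score(reference, ai_output):.2f}")
print(f"Macro F1: {f1_score(reference, ai_output, average='macro'):.2f}")

Because severity grading is a multi-class problem, sensitivity and specificity are computed per class in a one-vs-rest fashion, and the macro-averaged F1 summarizes performance across grades; the paper's exact aggregation choices may differ.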