Affiliation:
1. Department of Otolaryngology–Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
Abstract
Objective
Evaluate the quality of responses from Chat Generative Pre‐Trained Transformer (ChatGPT) models compared to the answers for "Frequently Asked Questions" (FAQs) from the American Academy of Otolaryngology–Head and Neck Surgery (AAO‐HNS) Clinical Practice Guidelines (CPG) for Ménière's disease (MD).

Study Design
Comparative analysis.

Setting
The AAO‐HNS CPG for MD includes FAQs that clinicians can give to patients for MD‐related questions. The ability of ChatGPT to properly educate patients regarding MD is unknown.

Methods
ChatGPT‐3.5 and 4.0 were each prompted with 16 questions from the MD FAQs. Each response was rated in terms of (1) comprehensiveness, (2) extensiveness, (3) presence of misleading information, and (4) quality of resources. Readability was assessed using the Flesch‐Kincaid Grade Level (FKGL) and the Flesch Reading Ease Score (FRES).

Results
ChatGPT‐3.5 was comprehensive in 5 responses whereas ChatGPT‐4.0 was comprehensive in 9 (31.3% vs 56.3%, P = .2852). ChatGPT‐3.5 and 4.0 were extensive in all responses (P = 1.0000). ChatGPT‐3.5 was misleading in 5 responses whereas ChatGPT‐4.0 was misleading in 3 (31.3% vs 18.75%, P = .6851). ChatGPT‐3.5 provided quality resources in 10 responses whereas ChatGPT‐4.0 provided quality resources in 16 (62.5% vs 100%, P = .0177). The AAO‐HNS CPG FRES (62.4 ± 16.6) met the recommended readability score of at least 60, whereas both ChatGPT‐3.5 (39.1 ± 7.3) and 4.0 (42.8 ± 8.5) fell below this standard. All platforms had mean FKGL values exceeding the recommended level of 6 or lower.

Conclusion
While ChatGPT‐4.0 had significantly better resource reporting, both models have room for improvement in being more comprehensive, more readable, and less misleading for patients.
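For readers unfamiliar with the two readability metrics named in the Methods, the sketch below shows how FRES and FKGL are typically computed from words-per-sentence and syllables-per-word counts. The abstract does not state which software was used; the `readability` function, the naive syllable counter, and the sample sentence here are illustrative assumptions only, not the authors' actual pipeline.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic (assumption): count runs of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) using the standard Flesch formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)           # words per sentence
    spw = syllables / len(words)                # syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease Score
    fkgl = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid Grade Level
    return fres, fkgl

# Hypothetical usage on one model response (illustrative text only):
fres, fkgl = readability("Meniere's disease causes vertigo. It can also cause hearing loss.")
print(f"FRES = {fres:.1f}, FKGL = {fkgl:.1f}")
```

Under the thresholds cited in the Results, a response would be considered appropriately readable if its FRES is at least 60 and its FKGL is 6 or lower.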