BACKGROUND
Medical documentation plays a crucial role in clinical practice, facilitating accurate patient management and communication among healthcare professionals. However, inaccuracies in medical notes can lead to miscommunication and diagnostic errors. Additionally, the demands of documentation contribute to physician burnout. While intermediaries like medical scribes and speech recognition software have been used to ease this burden, they have limitations in terms of accuracy and addressing provider-specific metrics. The integration of ambient AI-powered solutions offers a promising way to improve documentation while fitting seamlessly into existing workflows.
OBJECTIVE
This study aims to assess the accuracy and quality of SOAP (Subjective, Objective, Assessment, and Plan) notes generated by ChatGPT-4, an AI model, using established transcripts of History and Physicals (H&Ps). We seek to identify potential errors and evaluate the model's performance across different categories.
METHODS
We conducted simulated patient-provider encounters representing various ambulatory specialties and transcribed the audio recordings. Key reportable elements were identified, and ChatGPT-4 was used to generate SOAP notes based on these transcripts. Three versions of each note were created, and errors were categorized as omissions, incorrect information, or additions. We compared the accuracy of data elements across versions, transcript lengths, and data categories. Additionally, we assessed note quality using the Physician Documentation Quality Instrument (PDQI) scoring system.
RESULTS
While ChatGPT-4 consistently generated SOAP-style notes, there were, on average, 23.6 errors per clinical case, with errors of omission being the most common (86%), followed by addition errors (10.5%) and inclusion of incorrect facts (3.2%). There was significant variance between replicates of the same case, with only 52.9% of data elements reported correctly across all 3 replicates. The accuracy of data elements varied across cases, with the highest accuracy observed in the objective section. Consequently, measures of note quality, as assessed by the PDQI, demonstrated both intra- and inter-case variance. Finally, the accuracy of ChatGPT-4 was inversely correlated with both transcript length (P=0.003) and the number of scorable data elements (P=0.003).
CONCLUSIONS
Our study reveals substantial variability in the errors, accuracy, and note quality generated by ChatGPT-4. Errors were not limited to specific sections, and the inconsistency in error types across replicates complicates predictability. Transcript length and data complexity were inversely correlated with note accuracy, raising concerns about the model's effectiveness in handling complex medical cases. The quality and reliability of clinical notes produced by ChatGPT-4 do not meet the standards required for clinical use. While AI holds promise in healthcare, caution should be exercised before widespread adoption. Further research is needed to address the issues of accuracy, variability, and potential errors. ChatGPT-4, while a valuable tool in many applications, should not be considered a safe alternative to human-generated clinical documentation at this time.