Abstract
Background
Hospital discharge summaries play an essential role in informing GPs of recent admissions, ensuring continuity of care and preventing adverse events. However, they are notoriously poorly written and time-consuming to produce, and can result in delayed discharge.

Aim
To evaluate the potential of AI to produce high-quality discharge summaries equivalent to those of a doctor who has completed the UK Foundation Programme.

Design & setting
Feasibility study using 25 mock patient vignettes.

Method
From the 25 mock patient vignettes, 25 ChatGPT-written and 25 junior doctor-written discharge summaries were generated. Quality and suitability were assessed by independent GP evaluators and by adherence to a minimum dataset.

Results
All 25 (100%) of the AI-written discharge summaries were deemed by GPs to be of acceptable quality, compared with 92% of the junior doctor summaries. Both groups showed a mean compliance of 97% with the minimum dataset. In addition, GPs' ability to identify whether a summary was written by ChatGPT was poor, with only 60% detection accuracy. Similarly, when run through an AI detection tool, all summaries were classified as very unlikely to have been written by AI.

Conclusion
AI can produce discharge summaries of equivalent quality to those of a doctor who has completed the UK Foundation Programme; however, larger studies using real-world patient data and NHS-approved AI tools will need to be conducted.
Publisher
Royal College of General Practitioners