BACKGROUND
Low back pain (LBP) is a significant global public health concern, carrying a substantial burden of disease. With the increasing integration of AI technologies into healthcare, it is essential to evaluate their effectiveness in providing high-quality, accurate information in response to common LBP concerns.
OBJECTIVE
The purpose of this research is to examine the health information quality and accuracy of conversational agents (CAs) and generative AI (GAI) models in response to questions about LBP.
METHODS
A systematic evaluation was conducted on four commonly used CAs and two GAI models using a piloted script of 25 prompts covering key aspects of LBP, including causes, treatment, the ability to exercise and work, and imaging. The responses were compiled and transcribed for assessment. Information quality was assessed using the JAMA benchmark criteria and the DISCERN tool, and accuracy was assessed against the UK NICE Low Back Pain and Sciatica guideline and the Australian Low Back Pain Clinical Care Standard.
RESULTS
The study revealed substantial variation in both information quality and accuracy across the CAs and GAI models. Overall, responses exhibited poor quality but moderate accuracy. Siri demonstrated the best overall performance on combined quality and accuracy scores, whereas voice-only CAs performed worst on both measures. GAI models achieved the highest information accuracy but lower information quality overall.
CONCLUSIONS
The findings highlight the need for improvements in AI health information delivery to ensure that the public receives reliable and up-to-date information on health issues such as LBP.
CLINICALTRIAL
N/A