Affiliation:
1. Department of Pharmacology & Therapeutics, College of Medicine & Medical Sciences, Arabian Gulf University, Manama, Kingdom of Bahrain
Abstract
Aims: This study compared the potential of three artificial intelligence (AI) platforms to identify the drug therapy communication competencies expected of a graduating medical doctor.

Methods: We presented three AI platforms, namely Poe Assistant©, ChatGPT© and Google Bard©, with structured queries to generate communication skill competencies and case scenarios appropriate for graduating medical doctors. These case scenarios comprised 15 prototypical medical conditions that required drug prescriptions. Two authors independently evaluated the AI-enhanced clinical encounters, which integrated a diverse range of information to create patient-centred care plans. Using a checklist and a consensus-based approach, they assessed the communication components generated for each scenario. The instructions and warnings provided for each case scenario were evaluated against the British National Formulary.

Results: The AI platforms demonstrated overlap in the competency domains they generated, albeit with variations in wording. The knowledge domains (basic and clinical pharmacology, prescribing, communication and drug safety) were recognized unanimously by all platforms. A broad consensus between Poe Assistant© and ChatGPT© on the drug therapy-related communication issues specific to each case scenario was evident, primarily encompassing the salutation, the generic drug prescribed, treatment goals and follow-up schedules. Differences were observed in the clarity of patient instructions, the side effects listed, warnings and patient empowerment. Google Bard© provided no guidance on patient communication issues.

Conclusions: The AI platforms recognized the competencies, with variations in how these were stated. Poe Assistant© and ChatGPT© showed alignment on communication issues. However, significant discrepancies were observed in specific skill components, indicating the need for human intervention to critically evaluate AI-generated outputs.