Authors:
Huang Zhilian, Lim Hannah Yee-Fen, Ow Jing Teng, Sun Shirley Hsiao-Li, Chow Angela
Abstract
Objectives: The increased use of artificial intelligence (AI) in healthcare is changing practice and introduces ethical implications for AI adoption in medicine. We assessed medical doctors' ethical stances in situations that arise when adopting an AI-enabled Clinical Decision Support System (AI-CDSS) for antibiotic prescribing decision support at a healthcare institution in Singapore.
Methods: We conducted in-depth interviews with 30 doctors of varying medical specialties and designations between October 2022 and January 2023. Our interview guide was anchored on the four pillars of medical ethics. We used clinical vignettes covering the following hypothetical scenarios: (1) using an antibiotic AI-enabled CDSS's recommendations for a tourist; (2) uncertainty about the AI-CDSS's recommendation of a narrow-spectrum antibiotic versus concerns about antimicrobial resistance; (3) a patient refusing the "best treatment" recommended by the AI-CDSS; (4) a data breach.
Results: More than half of the participants realized that the AI-enabled CDSS could have misrepresented non-local populations only after being prompted to consider the AI-CDSS's data source. When deciding between a broad- and a narrow-spectrum antibiotic, most participants preferred to exercise their clinical judgment over the AI-enabled CDSS's recommendations in their patients' best interest. Two-thirds of participants prioritized beneficence over patient autonomy, convincing patients who refused the best-practice treatment to accept it. Many were unaware of the implications of data breaches.
Conclusion: The current position on legal liability for the use of AI-enabled CDSSs is unclear in relation to doctors, hospitals, and CDSS providers. A comprehensive ethical, legal, and regulatory framework, perceived organizational support, and adequate knowledge of AI and ethics are essential for successfully implementing AI in healthcare.
Funder
Nanyang Technological University