Technology is increasingly used in decision-making across all fields, but especially in healthcare. Automated decision-making (ADM) promises to change medical practice and potentially improve and streamline the provision of healthcare. Although the integration of AI into medicine is encouraging, it is also met with fears concerning transparency and accountability. To ease these fears, legislators and policymakers have relied on the right to explanation, a new right guaranteed to those affected by ADM. This is particularly apparent in Quebec, where legislators recently passed Law 5, An Act respecting health and social services information and amending various legislative provisions. This paper explores the practical implications of Law 5, and by extension of the right to explanation internationally, in the healthcare field. We highlight that the right to explanation is anticipated to alter physicians' obligations to patients, namely the duty to inform. However, we also discuss how the legislation establishing the right to explanation is vaguely drafted and difficult to enforce, which dilutes its potential to provide meaningful protections for those affected by automated decisions. After all, artificial intelligence is a complex and innovative technology and, as such, requires complex and innovative policies. The right to explanation is not necessarily the answer.