Abstract
Considering the overall shortage of therapists to meet the psychological needs of vulnerable populations, AI-based technologies are often seen as a possible remedy. Smartphone apps and chatbots in particular are increasingly used to offer mental health support, mostly through cognitive behavioral therapy. The assumption underlying the deployment of these systems is their ability to make mental health support accessible to generally underserved populations. Hence, this seems to be aligned with the fundamental biomedical principle of justice understood in its distributive meaning. However, considerations of the principle of justice in its epistemic significance are still in their infancy in the debates on the ethical issues connected to the use of mental health chatbots. This paper aims to fill this research gap, focusing on a less familiar kind of harm that these systems can cause, namely the harm to users in their capacities as knowing subjects. More specifically, we frame our discussion in terms of one form of epistemic injustice that such practices are especially prone to bring about, i.e., participatory injustice. To make our theoretical analysis more concrete and to show its urgency, we discuss the case of a mental health chatbot, Karim, deployed to deliver mental health support to Syrian refugees. This case substantiates our theoretical considerations and the epistemo-ethical concerns arising from the use of mental health applications among vulnerable populations. Finally, we argue that conceptualizing epistemic participation as a capability within the framework of Capability Sensitive Design can be a first step toward ameliorating the participatory injustice discussed in this paper.
Funder
Horizon 2020
H2020 European Research Council
Publisher
Springer Science and Business Media LLC