Affiliation:
1. Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
2. Digital Society Initiative, University of Zurich, Zurich, Switzerland
Abstract
Mental health apps bring unprecedented benefits and risks to individual and public health. A thorough evaluation of these apps involves considering two aspects that are often neglected: the algorithms they deploy and the functions they perform. We focus on mental health apps based on black box algorithms, explore their forms of opacity, discuss the implications of that opacity, and propose how their outcomes should be used in mental healthcare, self-care practices, and research. We argue that there is a relevant distinction between the functions performed by algorithms in mental health apps, and we focus on two: analysis and the generation of advice. When performing analytic functions, such as identifying patterns and making predictions concerning people's emotions, thoughts, and behaviors, black box algorithms can outperform other algorithms in providing information to identify early signs of relapse, support diagnostic processes, and improve research by generating outcomes that lead to a better understanding of mental health. However, when carrying out the function of providing mental health advice, black box algorithms have the potential to deliver unforeseen advice that may harm users. We argue that the outcomes of these apps may be trustworthy as a complementary source of information, but we caution against black box algorithms that give advice directly to users. To reap the benefits of mental health apps based on black box algorithms and avoid unintended consequences, we critically need to know whether these algorithms are fulfilling the function of providing mental health advice.
Subject
Health Policy, Philosophy, Health (social science)
Cited by
1 article.