Abstract
Background
Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on the ethical and social challenges associated with these technologies, and some studies have empirically investigated which values and attitudes shape their design and implementation. The comparative investigation of the perspectives of different stakeholders, however, is still in its infancy.
Methods
To explore this issue from multiple angles, we conducted semi-structured interviews as well as focus group discussions with patients and clinicians. These empirical methods were used to gather interviewees’ views on the opportunities and challenges of medical AI and other data-intensive applications.
Results
Clinician and patient groups are exposed to medical AI to differing degrees. Interviewees expect and demand that the purposes of data processing accord with patient preferences, and that data are put to effective use to generate social value. One central result is the shared tendency of clinicians and patients to maintain individualistic ascriptions of responsibility for clinical outcomes.
Conclusions
Medical AI and the proliferation of data with import for health-related inferences shape and partially reconfigure stakeholder expectations of how these technologies relate to the decision-making of human agents. Intuitions about individual responsibility for clinical outcomes could eventually be disrupted by the increasing sophistication of data-intensive and AI-driven clinical tools. Besides individual responsibility, systemic governance will be key to promoting alignment with stakeholder expectations in AI-driven and data-intensive health settings.
Funder
Bundesministerium für Gesundheit
Ministerie van Onderwijs, Cultuur en Wetenschap
Publisher
Springer Science and Business Media LLC