Author:
Michaela Soellner, Joerg Koenigstorfer
Abstract
Background
The goal of the study is to assess the downstream effects of who requests personal information from individuals for artificial intelligence (AI)-based healthcare research purposes—be it a pharmaceutical company (as an example of a for-profit organization) or a university hospital (as an example of a not-for-profit organization)—as well as the boundary conditions of these effects on individuals' likelihood to release personal information about their health. For the latter, the study considers two dimensions: the tendency to self-disclose (which should ideally be high so that AI applications can reach their full potential) and the tendency to falsify (which should ideally be low so that AI applications are based on both valid and reliable data).

Methods
Across three experimental studies with Amazon Mechanical Turk workers from the U.S. (n = 204, n = 330, and n = 328, respectively), Covid-19 was used as the healthcare research context.

Results
University hospitals (vs. pharmaceutical companies) scored higher on altruism and lower on egoism. Individuals were more willing to disclose data if they perceived that the requesting organization acts based on altruistic motives (i.e., the motives function as gate openers). Individuals were more likely to protect their data by intending to provide false information when they perceived egoistic motives to be the main driver for the organization requesting their data (i.e., the motives function as a privacy protection tool). Two moderators, namely message appeal (Study 2) and message endorser credibility (Study 3), influence the two indirect pathways to the release of personal information.

Conclusions
The findings add to Communication Privacy Management Theory as well as Attribution Theory by suggesting motive-based pathways to the release of correct personal health data. Compared to not-for-profit organizations, for-profit organizations are particularly recommended to match their message appeal with the organization's purpose (to provide personal benefit) and to use high-credibility endorsers in order to reduce inherent disadvantages in motive perceptions.
Funder
Technische Universität München
Publisher
Springer Science and Business Media LLC
Subject
Health Informatics, Health Policy, Computer Science Applications
Cited by
1 article.