Affiliations:
1. Department of Information Systems and Cyber Security, Carlos Alvarez College of Business, University of Texas at San Antonio, San Antonio, Texas, USA
2. Department of Information Systems, Statistics, and Management Science, Culverhouse College of Business, The University of Alabama, Tuscaloosa, Alabama, USA
Abstract
To address various business challenges, organisations are increasingly employing artificial intelligence (AI) to analyse vast amounts of data. One application involves consolidating diverse user data into unified profiles, aggregating consumer behaviours to accurately tailor marketing efforts. Although AI provides more convenience to consumers and more efficient and profitable marketing for organisations, aggregating data into behavioural profiles for use in machine learning algorithms introduces significant privacy implications for users, including unforeseeable personal disclosure, outcomes biased against marginalised population groups, and organisations' inability to fully remove data from AI systems on consumer request. Although these implementations of AI are rapidly altering the way consumers perceive information privacy, researchers have thus far lacked an accurate method for measuring consumers' privacy concerns related to AI. In this study, we aim to (1) validate a scale for measuring privacy concerns related to AI misuse (PC-AIM) and (2) examine the effects that PC-AIM has on nomologically related constructs under the APCO framework. We provide evidence demonstrating the validity of our newly developed scale. We also find that PC-AIM significantly increases risk beliefs and personal privacy advocacy behaviour, while decreasing trusting beliefs. Trusting beliefs and risk beliefs do not significantly affect behaviour, which differs from prior privacy findings. We further discuss the implications of our work for both research and practice.