Modern technology-augmented healthcare increasingly focuses on personalizing the delivery of medical services. This trend is driven in part by the growing rhetoric around patient diversity, empowerment, and choice as factors that affect the success of care. In parallel, there is a push to apply the latest advances in AI-based systems, especially intelligent agents (IA) or artificial agents (AA), to autonomously carry out or support interaction within healthcare services and personal health contexts. Robots, conversational agents, voice assistants, virtual characters: do these disparate forms of AI-based agents applied in care contexts have something in common? When and under what circumstances? Here we describe how they can manifest as "socially embodied AI," which we define as the state an AI-based agent takes on when it is embedded within social and technologically nonpartisan "bodies" and contexts: a social form of human-AI interaction (HAII). We argue that this state is constructed by and dependent on human perception, arising when an embodied AI is perceived as having social characteristics and being socially interactive. Moreover, we argue that social embodiment is a "circumstantial" category: any embodied AI can become socially embodied if certain criteria are met, but this may not hold for all people, at all times, or in all situations. As a first step toward dealing with this complexity, we present an ontology for demarcating when an embodied AI transitions into a socially embodied AI. We draw from theory and practice in human-machine communication (HMC), human-computer interaction (HCI), human-robot interaction (HRI), human-agent interaction (HAI), and social psychology. We reinforce our theoretical work with expert insights from a card sorting workshop. We then propose an ontological heuristic for describing the dynamic threshold through which an AI-based agent becomes socially embodied: the Tepper line. We explore two case studies that illustrate the dynamic and contextual nature of this heuristic in healthcare contexts. We end by discussing the possible implications of "crossing the Tepper line" from both AI- and human-centered viewpoints in person-centered care.