Affiliations:
1. University of Georgia, Athens, GA, USA
2. Stanford University, Stanford, CA, USA
3. Stanford University, Palo Alto, CA, USA
Abstract
Artificial Intelligence (AI) is a transformative force in communication and messaging strategy, with the potential to disrupt traditional approaches. Large language models (LLMs), a form of AI, are capable of generating high-quality, humanlike text. We investigate the persuasive quality of AI-generated messages to understand how AI could impact public health messaging. Specifically, through a series of studies designed to characterize and evaluate generative AI in developing public health messages, we analyze COVID-19 pro-vaccination messages generated by GPT-3, a state-of-the-art instantiation of a large language model. Study 1 is a systematic evaluation of GPT-3's ability to generate pro-vaccination messages. Study 2 compares people's perceptions of curated GPT-3-generated messages with human-authored messages released by the CDC (Centers for Disease Control and Prevention), finding that GPT-3 messages were perceived as more effective and as stronger arguments, and evoked more positive attitudes, than CDC messages. Finally, Study 3 assesses the role of source labels on perceived quality, finding that while participants preferred AI-generated messages, they expressed a dispreference for messages labeled as AI-generated. The results suggest that, with human supervision, AI can be used to create effective public health messages, but that individuals prefer their public health messages to come from human institutions rather than AI sources. We propose best practices for assessing generative outputs of large language models in future social science research and ways health professionals can use AI systems to augment public health messaging.
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Human-Computer Interaction, Social Sciences (miscellaneous)
Cited by
27 articles.