Affiliations:
1. Hudson College of Public Health, University of Oklahoma Health Sciences, Oklahoma City, Oklahoma, USA
2. Lyndon B. Johnson School of Public Affairs, University of Texas at Austin, Austin, Texas, USA
Abstract
The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM‐based chat programs for risk communication. We examine ChatGPT‐generated responses to 24 different hazard situations and compare these responses to guidelines published for public consumption on the US Department of Homeland Security's Ready.gov website. We find that, although ChatGPT did not generate false or misleading responses, its responses were typically less than optimal in their similarity to guidance from the federal government. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis substantially different from those on Ready.gov. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges posed by a potential shift in information flows away from public officials and experts and toward individuals.