Abstract
Some have heralded generative AI models as an opportunity to inform diplomacy and support diplomats’ communication campaigns. Others have argued that generative AI is inherently untrustworthy because it merely manages probabilities and does not consider the truth value of statements. In this article, we examine how AI applications are built to smooth over uncertainty by providing a single answer among multiple possible answers and by presenting information in a tone and form that commands authority. We contrast this with the practices of public diplomacy professionals, who must grapple with both epistemic and aleatory uncertainty head-on to effectively manage complexities through negotiation. We argue that the rise of generative AI and its “operationalization of truth” invites us to reflect on the possible shortcomings of applying AI to public diplomacy practices and to recognize how prominent uncertainty is in those practices.
Funder
Western Sydney University
Publisher
Springer Science and Business Media LLC