Abstract
Recent advances in natural language generation (NLG), such as public accessibility to ChatGPT, have sparked polarised debates about the societal impact of this technology. Popular discourse tends towards either overoptimistic hype that touts the radically transformative potentials of these systems or pessimistic critique of their technical limitations and general ‘stupidity’. Surprisingly, these debates have largely overlooked the exegetical capacities of these systems, which for many users seem to be producing meaningful texts. In this paper, we take an interdisciplinary approach that combines hermeneutics—the study of meaning and interpretation—with prompt engineering—task descriptions embedded in input to NLG systems—to study the extent to which a specific NLG system, ChatGPT, produces texts of hermeneutic value. We design prompts with the goal of optimising hermeneuticity rather than mere factual accuracy, and apply them in four different use cases combining humans and ChatGPT as readers and writers. In most cases, ChatGPT produces readable texts that respond clearly to our requests. However, increasing the specificity of prompts’ task descriptions leads to texts with intensified neutrality, indicating that ChatGPT’s optimisation for factual accuracy may actually be detrimental to the hermeneuticity of its output.
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Human-Computer Interaction, Philosophy
Cited by 20 articles.