Abstract
Large language models (LLMs) represent a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which display emergent capabilities and are adaptable to a wide range of downstream tasks. In this article, we address that gap by outlining a novel blueprint for how to audit LLMs. Specifically, we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs) complement and inform each other. We show how audits, when conducted in a structured and coordinated manner on all three levels, can be a feasible and effective mechanism for identifying and managing some of the ethical and social risks posed by LLMs. However, it is important to remain realistic about what auditing can reasonably be expected to achieve. Therefore, we discuss the limitations not only of our three-layered approach but also of the prospect of auditing LLMs at all. Ultimately, this article seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate LLMs from technical, ethical, and legal perspectives.
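The abstract's three-layered structure can be made concrete with a small sketch. The following is a hypothetical Python illustration, not taken from the paper itself: all class, field, and function names here are assumptions introduced for illustration, and the sketch only shows one way that findings from one audit layer might feed into another.

```python
# Hypothetical sketch (not from the paper) of the three-layered audit
# approach described in the abstract. All names are illustrative
# assumptions. Requires Python 3.9+ for built-in generic annotations.
from dataclasses import dataclass, field


@dataclass
class AuditFinding:
    """A single issue identified during an audit at any layer."""
    layer: str          # "governance", "model", or "application"
    description: str
    severity: str       # e.g. "low", "medium", "high"


@dataclass
class ThreeLayeredAudit:
    """Collects findings from all three layers so they can inform each other."""
    findings: list[AuditFinding] = field(default_factory=list)

    def record(self, layer: str, description: str, severity: str) -> None:
        """Log a finding produced by one of the three audit layers."""
        self.findings.append(AuditFinding(layer, description, severity))

    def findings_for(self, layer: str) -> list[AuditFinding]:
        """Findings from the other two layers that should feed into `layer`."""
        return [f for f in self.findings if f.layer != layer]


# Example: a pre-release model-audit finding informs the application audit.
audit = ThreeLayeredAudit()
audit.record("governance", "No documented model release policy", "medium")
audit.record("model", "Elevated toxicity on adversarial prompts", "high")
for finding in audit.findings_for("application"):
    print(f"[{finding.severity}] {finding.layer}: {finding.description}")
```

In this toy model, `findings_for` stands in for the abstract's claim that the three audit types "complement and inform each other": an audit at one layer can consume the findings recorded by the other two.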
Funder
AstraZeneca
The Centre for the Governance of AI
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
Cited by
39 articles.