Abstract
As machine learning (ML) has emerged as the predominant technological paradigm for artificial intelligence (AI), complex black-box models such as GPT-4 have gained widespread adoption. Concurrently, explainable AI (XAI) has risen in significance as a counterbalancing force. But the rapid expansion of this research domain has led to a proliferation of terminology and an array of diverse definitions, making it increasingly challenging to maintain coherence. This confusion of languages also stems from the plethora of different perspectives on XAI, e.g. ethics, law, standardization and computer science. This situation threatens to create a “tower of Babel” effect, whereby a multitude of languages impedes the establishment of a common (scientific) ground. In response, this paper first maps the different vocabularies used in ethics, law and standardization. It shows that despite a quest for standardized, uniform XAI definitions, there is still a confusion of languages. Drawing lessons from these viewpoints, it subsequently proposes a methodology for identifying a unified lexicon from a scientific standpoint. This could aid the scientific community in presenting a more unified front to better influence ongoing definition efforts in law and standardization, which often lack sufficient scientific representation and which will shape the nature of AI and XAI in the future.
Publisher
Springer Nature Switzerland
Cited by 1 article.