Explainability of Automated Fact Verification Systems: A Comprehensive Review
Published: 2023-11-23
Volume: 13
Issue: 23
Page: 12608
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Authors:
Vallayil, Manju 1; Nand, Parma 1; Yan, Wei Qi 1; Allende-Cid, Héctor 2,3
Affiliations:
1. School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
2. Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
3. Knowledge Discovery, Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), 53757 Sankt Augustin, Germany
Abstract
The rapid growth in Artificial Intelligence (AI) has led to considerable progress in Automated Fact Verification (AFV). This process involves collecting evidence for a statement, assessing its relevance, and predicting the statement's accuracy. Recently, research has begun to explore automatic explanations as an integral part of the accuracy analysis process. However, explainability within AFV lags behind the wider field of explainable AI (XAI), which aims to make AI decisions more transparent. This study examines the notion of explainability as a topic within XAI, with a focus on how it applies to the specific task of Automated Fact Verification. It examines the explainability of AFV with respect to architectural, methodological, and dataset-related elements, with the aim of making AI more comprehensible and acceptable to general society. Although there is broad consensus on the need for AI systems to be explainable, there is a dearth of systems and processes to achieve it. This research investigates the concept of explainable AI in general and demonstrates its various aspects through the particular task of Automated Fact Verification. The study also explores the topic of faithfulness in the context of local and global explainability. The paper concludes by highlighting the gaps and limitations in current data science practices and offering recommendations for modifications to architectural and data curation processes, contributing to the broader goals of explainability in Automated Fact Verification.
Funders:
Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia, as part of the Lamarr Institute for Machine Learning and Artificial Intelligence; Auckland University of Technology
Subjects:
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science