Provenance documentation to enable explainable and trustworthy AI: A literature review

Authors:

Kale Amruta1, Nguyen Tin2, Harris Frederick C.2, Li Chenhao1, Zhang Jiyin1, Ma Xiaogang1

Affiliation:

1. Department of Computer Science, University of Idaho, Moscow, ID 83844, USA

2. Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV 89557, USA

Abstract

Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly discussed that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems. We also discuss the patterns of recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about essential components of provenance, XAI, and TAI.

Publisher

MIT Press

Subject

Artificial Intelligence, Library and Information Sciences, Computer Science Applications, Information Systems

References: 100 articles.

1. Wing. Ten research challenge areas in data science. Harvard Data Science Review, 2020.

2. Goodman. European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 2017.

3. Castelvecchi. Can we open the black box of AI? Nature News, 2016.

4. Adadi. Peeking inside the black box: A survey on explainable artificial intelligence (XAI). IEEE Access, 2018.

Cited by 10 articles.

1. Data Management and Ontology Development for Provenance-Aware Organizations in Linked Data Space. European Journal of Technic, 2023-12-26.

2. Modeling of Path Loss for Radio Wave Propagation in Wireless Sensor Networks in Cassava Crops Using Machine Learning. Agriculture, 2023-10-25.

3. PINNProv: Provenance for Physics-Informed Neural Networks. 2023 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW), 2023-10-17.

4. Enabling the Informed Patient Paradigm with Secure and Personalized Medical Question Answering. Proceedings of the 14th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, 2023-09-03.

5. Geoweaver_cwl: Transforming geoweaver AI workflows to common workflow language to extend interoperability. Applied Computing and Geosciences, 2023-09.

