A Practical Tutorial on Explainable AI Techniques

Authors:

Adrien Bennetot (1), Ivan Donadello (2), Ayoub El Qadi El Haouari (3, 4), Mauro Dragoni (5), Thomas Frossard (4), Benedikt Wagner (6), Anna Sarranti (7), Silvia Tulli (3), Maria Trocan (8), Raja Chatila (3), Andreas Holzinger (7, 9), Artur d'Avila Garcez (10), Natalia Díaz-Rodríguez (11)

Affiliations:

1. Sorbonne Université, Paris, France

2. Free University of Bozen-Bolzano, Bolzano, Italy

3. Sorbonne Université, Paris, France

4. Tinubu Square, Paris, France

5. Fondazione Bruno Kessler, Trento, Italy

6. City University of London, London, United Kingdom

7. University of Natural Resources and Life Sciences, Vienna, Austria

8. Institut Supérieur d'Électronique de Paris (ISEP), Paris, France

9. Medical University of Graz, Graz, Austria

10. City University of London, London, United Kingdom

11. University of Granada, Granada, Spain

Abstract

Recent years have seen an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behaviour. As opaque Machine Learning models are increasingly employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. There is therefore general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency and fairness. This guide is intended to be the go-to handbook for anyone with a computer science background who wants to obtain intuitive, out-of-the-box explanations from Machine Learning models. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques to everyday models, datasets and use-cases. In each chapter, the reader will find a description of the proposed method as well as one or several examples of its use with Python notebooks, which can easily be modified for specific applications. We also explain the prerequisites for using each technique, what the user will learn about it, and which tasks it is aimed at.
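To give a flavour of the kind of out-of-the-box explanation the tutorial targets, the sketch below applies one simple post-hoc XAI technique, permutation feature importance, to a toy classifier. This is an illustrative example under stated assumptions (scikit-learn is available; the Iris dataset stands in for a real use-case), not code taken from the article's notebooks.

```python
# Minimal sketch of a post-hoc XAI technique: permutation feature importance.
# Assumes scikit-learn; the Iris dataset is a stand-in for a real application.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# features whose permutation hurts the score most matter most to the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The same pattern (fit a model, then probe it with a model-agnostic explainer) carries over to the more elaborate techniques the tutorial covers.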

Publisher

Association for Computing Machinery (ACM)

