Leveraging explanations in interactive machine learning: An overview

Authors:

Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth Daly

Abstract

Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and act as a mechanism to elicit user control, because once users understand, they can provide feedback. The goal of this paper is to present an overview of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state of the art, grouping relevant approaches based on their intended purpose and on how they structure the interaction, and highlighting similarities and differences between them. We also discuss open research issues and outline possible directions forward, with the hope of spurring further research on this blooming topic.

Publisher

Frontiers Media SA

Subject

Artificial Intelligence


Cited by 19 articles:

1. Towards a neuro-symbolic cycle for human-centered explainability. Neurosymbolic Artificial Intelligence, 2024-08-28.

2. Unpacking Human-AI interactions: From Interaction Primitives to a Design Space. ACM Transactions on Interactive Intelligent Systems, 2024-08-02.

3. Representation Debiasing of Generated Data Involving Domain Experts. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 2024-06-27.

4. An Explanatory Model Steering System for Collaboration between Domain Experts and AI. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 2024-06-27.

5. Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 2024-05-11.
