Author:
Yasmeen Alufaisan, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, Murat Kantarcioglu
Abstract
Explainable AI gives users insight into why a model makes its predictions, offering the potential for users to better understand and trust a model, and to recognize and correct incorrect AI predictions. Prior research on human interaction with explainable AI has focused on measures such as interpretability, trust, and usability of the explanation. Findings are mixed on whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but there is no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct vs. incorrect, but this ability was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the "why" information provided by explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
29 articles.