Affiliation:
1. The Ohio State University, Columbus, OH, USA
Abstract
We examined the impact of explainable artificial intelligence (XAI) on trust in a highly technical population working in a high-risk domain. Specifically, we studied the effect of an example-based explainable machine learning system on the trust of data analysts at a pipeline inspection company. The study compared a baseline interface with no explanation against two example-based explainable interfaces. We found that showing examples from multiple classes significantly increased trust relative to the other interfaces. We also found that, for this technical population, the ability to override the ML agent's decision influenced trust more than the amount of explanation shown in the interface.