Author:
Vinícius Araújo, Leandro Marinho
Abstract
It has been advocated that post-hoc explanation techniques are crucial for increasing trust in complex Machine Learning (ML) models. However, it is not yet well understood whether such explanation techniques are actually useful to users or easy for them to understand. In this work, we explore the extent to which the explanations produced by SHAP, a state-of-the-art post-hoc explainer, help humans make better decisions. In a malaria classification scenario, we designed an experiment with 120 volunteers to investigate whether humans, starting with zero knowledge about the classification mechanism, could replicate the performance of a complex ML classifier after having access to the model's explanations. Our results show that this is indeed the case: when presented with the ML model's outcomes and the explanations, humans improved their classification performance, indicating that they understood how the ML model makes its decisions.
Publisher
Sociedade Brasileira de Computação
Cited by
1 article.