Abstract
Purpose
This paper investigates whether professional data analysts' trust in black-box systems is increased by explainability artifacts.

Design/methodology/approach
The study was developed in two phases. First, a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were generated using the local interpretable model-agnostic explanations (LIME) algorithm. In the second phase, the model and the explainability outcomes were presented to a sample of data analysts from the financial market, and their trust in the models was measured. Finally, interviews were conducted to understand their perceptions of black-box models.

Findings
The data suggest that users' trust in black-box systems is high and that explainability artifacts do not influence this behavior. The interviews reveal that the nature and complexity of the problem a black-box model addresses influence users' perceptions; trust is reduced in situations that represent a threat (e.g. autonomous cars). Interviewees also raised concerns about the models' ethics.

Research limitations/implications
The study considered a small sample of professional analysts from the financial market, a sector that traditionally employs data analysis techniques for credit and risk analysis. Research with personnel in other sectors might reveal different perceptions.

Originality/value
Other studies of trust in black-box models and explainability artifacts have focused on ordinary users with little or no knowledge of data analysis. The present research focuses on expert users, which provides a different perspective: for them, trust is related to the quality of the data, the nature of the problem being solved, and the practical consequences. Explanation of the algorithm's mechanics itself is not significantly relevant.
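To make the two-phase setup concrete, the sketch below shows one plausible way to build a black-box classifier and derive LIME explanations for a single prediction. It is a minimal illustration, not the authors' actual code: the synthetic dataset, the network architecture, and the feature and class names are all assumptions standing in for the credit/risk data described in the paper.

```python
# Minimal sketch of the two-phase methodology: (1a) fit an artificial
# neural network as the black box, (1b) generate local explainability
# artifacts with LIME. Dataset and names are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a tabular credit/risk dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Phase 1a: the black-box prediction model (an artificial neural network).
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Phase 1b: local explainability artifacts via LIME.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feature_{i}" for i in range(X.shape[1])],
    class_names=["reject", "approve"],  # hypothetical class labels
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
# (feature condition, local weight) pairs -- the kind of artifact
# that would be shown to the analysts in phase 2.
print(explanation.as_list())
```

An explanation of this form (a short list of feature conditions with signed local weights) is the kind of artifact that, per the findings, did not significantly change expert users' trust.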
Subject
Management of Technology and Innovation, Marketing, Business and International Management, Management Information Systems
Cited by 1 article.