Authors:
Nicolas Pfeuffer, Lorenz Baum, Wolfgang Stammer, Benjamin M. Abdel-Karim, Patrick Schramowski, Andreas M. Bucher, Christian Hügel, Gernot Rohde, Kristian Kersting, Oliver Hinz
Abstract
The most promising machine learning methods can deliver highly accurate classification results, often outperforming standard white-box methods. However, it is hardly possible for humans to fully understand the rationale behind these black-box results, so such powerful methods hamper both the creation of new knowledge on the part of humans and the broader acceptance of the technology. Explainable Artificial Intelligence attempts to overcome this problem by making the results more interpretable, while Interactive Machine Learning integrates humans into the process of insight discovery. This paper builds on recent successes in combining these two cutting-edge technologies and proposes how Explanatory Interactive Machine Learning (XIL) can be embedded in a generalizable Action Design Research (ADR) process, called XIL-ADR. This approach can be used to analyze data, inspect models, and iteratively improve them. The paper demonstrates the application of this process using the diagnosis of viral pneumonia, e.g., COVID-19, as an illustrative example. By these means, the paper also illustrates how XIL-ADR can help identify shortcomings of standard machine learning projects, enable human users to gain new insights, and thereby help unlock the full potential of AI-based systems for organizations and research.
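Since the abstract describes the explain-correct-retrain loop only at a high level, the sketch below illustrates one way such human feedback can enter training. It is a minimal illustration, assuming a differentiable PyTorch classifier and a human-provided binary mask marking input regions an expert considers irrelevant; the penalty term follows the "right for the right reasons" idea common in the XIL literature, not necessarily the paper's exact implementation, and the names `xil_step`, `irrelevant_mask`, and `lam` are hypothetical.

```python
import torch
import torch.nn.functional as F

def xil_step(model, optimizer, x, y, irrelevant_mask, lam=10.0):
    """One explanatory-interactive training step (illustrative sketch).

    x:               batch of inputs, shape (N, C, H, W)
    y:               ground-truth labels, shape (N,)
    irrelevant_mask: 1 where a human expert marked the input as irrelevant
                     for the prediction (same shape as x), 0 elsewhere
    lam:             weight of the explanation penalty (hypothetical default)
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)

    # Standard prediction loss.
    ce = F.cross_entropy(logits, y)

    # Input gradients as a simple saliency proxy for the model's "explanation".
    saliency = torch.autograd.grad(
        F.log_softmax(logits, dim=1).sum(), x, create_graph=True
    )[0]

    # "Right for the right reasons"-style penalty: discourage the model from
    # relying on regions the human flagged as irrelevant.
    penalty = (irrelevant_mask * saliency).pow(2).sum()

    loss = ce + lam * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In an XIL-ADR cycle, the masks would come from domain experts, for example radiologists flagging confounding image regions such as scanner artifacts, and the loop of explaining, correcting, and retraining would repeat across ADR iterations.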
Funder
Johann Wolfgang Goethe-Universität, Frankfurt am Main
Publisher
Springer Science and Business Media LLC
Cited by
5 articles.