Explainability and causability in digital pathology

Authors:

Markus Plass 1, Michaela Kargl 1, Tim‐Rasmus Kiehl 2, Peter Regitnig 1, Christian Geißler 3, Theodore Evans 3, Norman Zerbe 2, Rita Carvalho 2, Andreas Holzinger 1,4, Heimo Müller 1

Affiliations:

1. Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria

2. Institute of Pathology, Charité‐Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt‐Universität zu Berlin, Berlin, Germany

3. DAI‐Labor, Agent Oriented Technologies (AOT), Technische Universität Berlin, Berlin, Germany

4. Human‐Centered AI Lab, University of Natural Resources and Life Sciences Vienna, Vienna, Austria

Abstract

The current move towards digital pathology enables pathologists to use artificial intelligence (AI)‐based computer programmes for the advanced analysis of whole slide images. However, the best‐performing AI algorithms for image analysis are currently deemed black boxes, since it often remains unclear, even to their developers, why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights into the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black‐box machine‐learning systems more transparent. These XAI methods are a first step towards making black‐box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and to achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field, since explainability and causability also play a crucial role in compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive 'what‐if' questions. In pathology, such user interfaces will not only be important for achieving a high level of causability; they will also be crucial for keeping the human in the loop and for bringing medical experts' experience and conceptual knowledge into AI processes.
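As one concrete illustration of the kind of XAI technique mentioned in the abstract, the minimal sketch below computes a vanilla gradient saliency map for a toy patch‐level classifier. The model architecture, tile size, and class labels are hypothetical placeholders introduced only for this example; they are not the methods or models discussed in the article.

import torch
import torch.nn as nn

# Toy stand-in for a patch-level classifier on whole-slide image tiles
# (hypothetical architecture; not a model from the article).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two illustrative classes, e.g. tumour vs. non-tumour
)
model.eval()

# One random RGB tile standing in for a 128 x 128 pixel patch.
tile = torch.rand(1, 3, 128, 128, requires_grad=True)

# Forward pass, then gradient of the predicted-class score w.r.t. the input.
logits = model(tile)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# Vanilla gradient saliency: per-pixel magnitude of the input gradient,
# highlighting image regions with the strongest influence on the prediction.
saliency = tile.grad.abs().max(dim=1)[0].squeeze(0)  # shape: (128, 128)
print(saliency.shape)

Such pixel‐level attributions are only the raw material for an explanation: turning them into a causally understandable account for the pathologist is precisely the causability challenge the article raises.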

Funder

Austrian Science Fund

Horizon 2020 Framework Programme

Österreichische Forschungsförderungsgesellschaft

Publisher

Wiley

Subject

Pathology and Forensic Medicine

Cited by

24 articles
