Explainable AI for Bioinformatics: Methods, Tools and Applications

Author:

Md Rezaul Karim [1,2], Tanhim Islam [3], Md Shajalal [4], Oya Beyan [1,5], Christoph Lange [1,2], Michael Cochez [6,7], Dietrich Rebholz-Schuhmann [8,9], Stefan Decker [1,2]

Affiliation:

1. Computer Science 5 - Information Systems and Databases, RWTH Aachen University, Germany

2. Department of Data Science and Artificial Intelligence, Fraunhofer FIT, Germany

3. Computer Science 9 - Process and Data Science, RWTH Aachen University, Germany

4. University of Siegen, Germany

5. Institute for Medical Informatics, Faculty of Medicine and University Hospital Cologne, University of Cologne, Germany

6. Department of Computer Science, Vrije Universiteit Amsterdam, the Netherlands

7. Elsevier Discovery Lab, Amsterdam, the Netherlands

8. ZB MED - Information Centre for Life Sciences, Cologne, Germany

9. Faculty of Medicine, University of Cologne, Germany

Abstract

Artificial intelligence (AI) systems utilizing deep neural networks and machine learning (ML) algorithms are widely used for solving critical problems in bioinformatics, biomedical informatics and precision medicine. However, complex ML models are often perceived as opaque, black-box methods, making it difficult to understand the reasoning behind their decisions. This lack of transparency can be a challenge for end-users and decision-makers, as well as AI developers. In sensitive areas such as healthcare, explainability and accountability are not only desirable properties but also legally required for AI systems that can have a significant impact on human lives. Fairness is another growing concern, as algorithmic decisions should not show bias or discrimination towards certain groups or individuals based on sensitive attributes. Explainable AI (XAI) aims to overcome the opaqueness of black-box models and to provide transparency in how AI systems make decisions. Interpretable ML models can explain how they make predictions and identify the factors that influence their outcomes. However, most state-of-the-art interpretable ML methods are domain-agnostic and have evolved from fields such as computer vision, automated reasoning or statistics, making their direct application to bioinformatics problems challenging without customization and domain adaptation. In this paper, we discuss the importance of explainability and algorithmic transparency in the context of bioinformatics. We provide an overview of model-specific and model-agnostic interpretable ML methods and tools and outline their potential limitations. We discuss how existing interpretable ML methods can be customized and fitted to bioinformatics research problems. Further, through case studies in bioimaging, cancer genomics and text mining, we demonstrate how XAI methods can improve transparency and decision fairness.
Our review aims to provide valuable insights and to serve as a starting point for researchers wanting to enhance explainability and decision transparency while solving bioinformatics problems. GitHub: https://github.com/rezacsedu/XAI-for-bioinformatics.
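The model-agnostic interpretable ML methods surveyed in the review share a common idea: probe a trained black-box model only through its predictions, without access to its internals. As a minimal illustration of that idea, the sketch below implements permutation feature importance from scratch on a synthetic dataset. The "gene" feature names, the stand-in classifier and the dataset are hypothetical assumptions for demonstration, not material from the paper; in practice one would apply a library such as SHAP or scikit-learn's `permutation_importance` to a real fitted model.

```python
# Sketch of a model-agnostic explanation technique: permutation importance.
# The dataset and "black-box" model below are synthetic placeholders; only
# gene_0 and gene_2 actually determine the label, so shuffling them should
# hurt accuracy while shuffling the other features should not.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "gene expression" matrix: 200 samples x 5 features.
X = rng.normal(size=(200, 5))
y = ((X[:, 0] + X[:, 2]) > 0).astype(int)

def model_predict(X):
    # Stand-in black-box classifier; any fitted model's predict() would do.
    return ((X[:, 0] + X[:, 2]) > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Accuracy drop when each feature is shuffled, averaged over repeats.

    Model-agnostic: only calls predict(), never inspects the model.
    """
    rng = np.random.default_rng(seed)
    base_acc = (predict(X) == y).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j only
            scores.append((predict(Xp) == y).mean())
        importances[j] = base_acc - np.mean(scores)
    return importances

imp = permutation_importance(model_predict, X, y)
for j, v in enumerate(imp):
    print(f"gene_{j}: {v:+.3f}")
```

The same probing principle underlies LIME and SHAP, which the review covers among the model-agnostic tools: they differ in how perturbations are generated and how the prediction changes are attributed back to features.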

Funder

Horizon Europe Research and Innovation Program

Publisher

Oxford University Press (OUP)

Subject

Molecular Biology, Information Systems

Cited by 5 articles.

1. Defining the boundaries: challenges and advances in identifying cells in microscopy images;Current Opinion in Biotechnology;2024-02

2. Toxicogenomics Approaches to Address Toxicity and Carcinogenicity in the Liver;Toxicologic Pathology;2024-01-30

3. Explainable AI Evaluation: A Top-Down Approach for Selecting Optimal Explanations for Black Box Models;Information;2023-12-20

4. Artificial Intelligence Techniques in Bioinformatics: Unravelling Complex Biological Systems;International Journal of Advanced Research in Science, Communication and Technology;2023-12-06

5. Interpreting Black-box Machine Learning Models for High Dimensional Datasets;2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA);2023-10-09
