Adversarial training improves model interpretability in single-cell RNA-seq analysis

Authors:

Mehrshad Sadria1, Anita Layton1,2,3,4, Gary D. Bader5,6,7,8,9

Affiliation:

1. Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada

2. Cheriton School of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada

3. Department of Biology, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada

4. School of Pharmacy, University of Waterloo, Waterloo, Ontario N2G 1C5, Canada

5. Department of Molecular Genetics, University of Toronto, Toronto, Ontario M5S 1A8, Canada

6. The Donnelly Centre, University of Toronto, Toronto, Ontario M5S 3E1, Canada

7. Department of Computer Science, University of Toronto, Toronto, Ontario M5S 2E4, Canada

8. The Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Ontario M5G 1X5, Canada

9. Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario M5G 2M9, Canada

Abstract

Motivation: Predictive computational models must be accurate, robust, and interpretable to be considered reliable in important areas such as biology and medicine. A sufficiently robust model should not have its output affected significantly by a slight change in the input. These models should also be able to explain how a decision is made, to support user trust in the results. Efforts have been made to improve the robustness and interpretability of predictive computational models independently; however, the interaction of robustness and interpretability is poorly understood.

Results: As an example task, we explore the computational prediction of cell type based on single-cell RNA-seq data and show that it can be made more robust by adversarially training a deep learning model. Surprisingly, we find this also leads to improved model interpretability, as measured by identifying genes important for classification using a range of standard interpretability methods. Our results suggest that adversarial training may be generally useful to improve deep learning robustness and interpretability and that it should be evaluated on a range of tasks.

Availability and implementation: Our Python implementation of all analysis in this publication can be found at: https://github.com/MehrshadSD/robustness-interpretability. The analysis was conducted using numPy 0.2.5, pandas 2.0.3, scanpy 1.9.3, tensorflow 2.10.0, matplotlib 3.7.1, seaborn 0.12.2, sklearn 1.1.1, shap 0.42.0, lime 0.2.0.1, matplotlib_venn 0.11.9.
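The core idea evaluated in the abstract — adversarial training, i.e. generating worst-case perturbed inputs during training and fitting the model on them — can be sketched in a few lines. The following is a minimal, hypothetical illustration using an FGSM-style perturbation on a toy logistic classifier over synthetic "gene expression" data; it is not the authors' model, and the data shapes, perturbation budget `eps`, and learning rate are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for expression data: 200 cells x 50 genes, two cell types.
X = rng.normal(size=(200, 50))
w_true = rng.normal(size=50)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(50)          # logistic-regression weights
eps, lr = 0.1, 0.5        # perturbation budget and learning rate (assumed)

for _ in range(200):
    # FGSM step: the sign of the loss gradient w.r.t. the INPUT gives the
    # worst-case perturbation direction within an L-infinity ball of radius eps.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Adversarial training step: fit the model on the perturbed inputs.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

# The adversarially trained model should still classify clean inputs well.
acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

In the paper's setting the classifier is a deep network trained with TensorFlow and interpretability is then measured with methods such as SHAP and LIME; the sketch above only illustrates the inner adversarial-training loop on a linear model.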

Funder

Natural Sciences and Engineering Research Council of Canada

Publisher

Oxford University Press (OUP)

Subject

Computer Science Applications, Genetics, Molecular Biology, Structural Biology
