Abstract
In machine learning (ML), association patterns in the data, paths in decision trees, and weights between layers of a neural network are often entangled due to multiple underlying causes, masking the pattern-to-source relation, weakening prediction, and defying explanation. This paper presents a revolutionary ML paradigm, pattern discovery and disentanglement (PDD), which disentangles associations and provides an all-in-one knowledge system capable of (a) disentangling patterns to associate them with distinct primary sources; (b) discovering rare/imbalanced groups, detecting anomalies, and rectifying discrepancies to improve class association as well as pattern and entity clustering; and (c) organizing knowledge with statistically supported interpretability for causal exploration. Results from case studies validate these capabilities. The explainable knowledge reveals pattern-source relations on entities and the underlying factors for causal inference, clinical study, and practice, thus addressing the major concerns of interpretability, trust, and reliability when applying ML to healthcare, a step towards closing the AI chasm.
Funder
Canadian Network for Research and Innovation in Machining Technology, Natural Sciences and Engineering Research Council of Canada
Publisher
Springer Science and Business Media LLC
Subject
Health Information Management, Health Informatics, Computer Science Applications, Medicine (miscellaneous)
References
33 articles.
Cited by
3 articles.