Affiliations:
1. VTT Technical Research Centre of Finland, 02150 Espoo, Finland
2. DFKI German Research Center for Artificial Intelligence, 67663 Kaiserslautern, Germany
3. Department of Computer Science, RPTU University Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
Abstract
Information that is complicated and ambiguous entails high cognitive load. Trying to understand such information can require substantial cognitive effort. One alternative to expending that effort is to engage in motivated cognition, which can involve selectively attending to new information that matches existing beliefs. Another alternative, in accordance with least-action principles of cognitive effort management, is to give up trying to understand new information that imposes high cognitive load. In either case, high cognitive load can limit the potential for understanding and learning from new information. Cognitive Load Theory (CLT) provides a framework for relating the characteristics of information to human cognitive load. Although CLT has been developed through more than three decades of scientific research, it has not been applied comprehensively to improve the explainability, transparency, interpretability, and shared interpretability (ETISI) of machine learning models and their outputs. Here, in order to illustrate the broad relevance of CLT to ETISI, it is applied to analyze a type of hybrid machine learning called Algebraic Machine Learning (AML). AML is used as the example because it has characteristics that offer high potential for ETISI. However, the application of CLT reveals potential for high cognitive load that can limit ETISI even when AML is used in conjunction with decision trees. Following the AML example, the general relevance of CLT to machine learning ETISI is discussed with the examples of SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and the Contextual Importance and Utility (CIU) method. Overall, this Perspective paper argues that CLT can provide science-based design principles that contribute to improving the ETISI of all types of machine learning.