A Meta Algorithm for Interpretable Ensemble Learning: The League of Experts

Authors:

Richard Vogel 1, Tobias Schlosser 2, Robert Manthey 1, Marc Ritter 1, Matthias Vodel 1, Maximilian Eibl 3, Kristan Alexander Schneider 4

Affiliations:

1. Media Informatics, University of Applied Sciences Mittweida, 09648 Mittweida, Germany

2. Media Computing, Chemnitz University of Technology, 09107 Chemnitz, Germany

3. Media Informatics, Chemnitz University of Technology, 09107 Chemnitz, Germany

4. Modeling and Simulation, University of Applied Sciences Mittweida, 09648 Mittweida, Germany

Abstract

Background. The importance of explainable artificial intelligence and machine learning (XAI/XML) is increasingly being recognized, with the aim of understanding how information contributes to decisions, a method's biases, or its sensitivity to data pathologies. Efforts are often directed toward post hoc explanations of black-box models; such approaches add additional sources of error without resolving the shortcomings of the underlying models. Less effort is directed toward the design of intrinsically interpretable approaches. Methods. We introduce an intrinsically interpretable methodology motivated by ensemble learning: the League of Experts (LoE) model. We first establish the theoretical framework and then deduce a modular meta algorithm. Our description focuses primarily on classification problems; however, LoE applies equally to regression problems. As a particular instance for classification problems, we employ classical decision trees as the classifiers of the ensemble. This choice facilitates the derivation of human-understandable decision rules for the underlying classification problem, resulting in a derived rule learning system denoted RuleLoE. Results. In addition to 12 KEEL classification datasets, we employ two standard datasets from the particularly relevant domains of medicine and finance to illustrate the LoE algorithm. In terms of accuracy and rule coverage, LoE performs comparably to common state-of-the-art classification methods. Moreover, LoE delivers a clearly understandable set of decision rules of adjustable complexity that describes the classification problem. Conclusions. LoE is a reliable method for classification and regression problems whose accuracy appears appropriate for situations in which the underlying causalities are of central interest rather than merely accurate predictions or classifications.
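The abstract describes the approach only at a high level. As a rough, purely illustrative sketch of the general idea it outlines (an ensemble of shallow, individually readable decision trees whose decision rules can be extracted), the following Python snippet uses scikit-learn. It is not the authors' LoE or RuleLoE implementation; the dataset (breast cancer, as a stand-in medical example), the ensemble size, the random feature subsetting, the tree depth, and the majority-vote combination are all assumptions made for demonstration only.

# Illustrative sketch only -- NOT the authors' LoE/RuleLoE implementation.
# Each "expert" is a shallow decision tree trained on a random feature
# subset; predictions are combined by majority vote, and each tree's
# decision rules are printed in a human-readable form.
import numpy as np
from sklearn.datasets import load_breast_cancer          # stand-in medical dataset
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

# Ensemble size, tree depth, and feature-subset size are arbitrary choices
# here; they control the complexity of the extracted rules.
n_experts, max_depth, n_features = 5, 3, 8
experts = []
for _ in range(n_experts):
    feats = rng.choice(X_train.shape[1], size=n_features, replace=False)
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(X_train[:, feats], y_train)
    experts.append((tree, feats))

# Combine the experts by majority vote (binary classification).
votes = np.array([t.predict(X_test[:, f]) for t, f in experts])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (y_pred == y_test).mean())

# Each expert is shallow, so its rule set stays short and readable.
for i, (tree, feats) in enumerate(experts):
    names = [str(data.feature_names[j]) for j in feats]
    print(f"--- rules of expert {i} ---")
    print(export_text(tree, feature_names=names))

Restricting each expert to a shallow tree over a small feature subset keeps its extracted rules short, which loosely mirrors the abstract's notion of decision rules with adjustable complexity (here via max_depth and n_features).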

Funders

European Social Fund

Free State of Saxony, Germany

Publisher

MDPI AG

