Abstract
Automated prediction systems based on machine learning (ML) are employed in practical applications with increasing frequency, and stakeholders demand explanations of their decisions. ML algorithms that learn accurate sets of rules, such as learning classifier systems (LCSs), produce transparent and human-readable models by design. However, whether such models can be used effectively, both for predictions and analyses, strongly depends on the optimal placement and selection of rules (in ML this task is known as model selection). In this article, we broaden a previous analysis of a variety of techniques for efficiently placing good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific pre-existing LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare two baselines, random search and a $$(1, \lambda)$$-evolution strategy (ES), with six novelty search variants: three novelty/fitness weighting variants and, for each of those, two differing approaches to using the archiving mechanism. We find that random search is not sufficient and that sensible criteria, i.e., error and generality, are indeed needed. However, we cannot confirm that the more complicated-to-explain novelty search variants provide better results than the $$(1, \lambda)$$-ES, which allows a good balance between low error and low complexity in the resulting models.
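To make the $$(1, \lambda)$$-ES baseline concrete, the following is a minimal illustrative sketch, not the SupRB implementation: a single parent rule (here, a one-dimensional matching interval) is mutated into $$\lambda$$ children, and comma selection replaces the parent with the best child. The fitness function combining local error and generality is a hypothetical stand-in for the weighting described in the article.

```python
import random

def mutate(rule, sigma=0.1):
    # Gaussian mutation of the rule's interval bounds (illustrative operator)
    low, high = rule
    low += random.gauss(0, sigma)
    high += random.gauss(0, sigma)
    return (min(low, high), max(low, high))

def fitness(rule, data):
    # Hypothetical weighting of generality against local prediction error;
    # the actual SupRB criterion may differ.
    low, high = rule
    matched = [(x, y) for x, y in data if low <= x <= high]
    if not matched:
        return float("-inf")
    mean_y = sum(y for _, y in matched) / len(matched)
    error = sum((y - mean_y) ** 2 for _, y in matched) / len(matched)
    generality = len(matched) / len(data)
    return generality - error

def one_comma_lambda_es(data, lam=8, generations=50):
    parent = (0.0, 1.0)
    for _ in range(generations):
        children = [mutate(parent) for _ in range(lam)]
        # comma selection: the parent is always replaced by the best child
        parent = max(children, key=lambda r: fitness(r, data))
    return parent
```

Note that comma selection discards the parent each generation even if all children are worse, which keeps the search from stagnating on a locally good rule.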
Funder
Bayerische Staatsministerium für Wirtschaft, Landesentwicklung und Energie
Deutsche Forschungsgemeinschaft
Universität Augsburg
Publisher
Springer Science and Business Media LLC
Subject
Computer Science Applications, Computer Networks and Communications, Computer Graphics and Computer-Aided Design, Computational Theory and Mathematics, Artificial Intelligence, General Computer Science
Cited by
3 articles.