Abstract
Scientists must be able to understand what machines do (systems should not behave as black boxes), because in many cases how they predict is more important than what they predict. In this work, we propose a new extension of the fuzzy linguistic grammar and a novel interpretable linear extension for regression problems, together with an enhanced linguistic tree-based evolutionary multiobjective learning approach. This allows both the general behavior of the covered data and their specific variability to be expressed as a single rule. To ensure the highest transparency and accuracy, the learning process maximizes two widely accepted semantic metrics while minimizing both the number of rules and the model's mean squared error. Results on 23 regression datasets, supported by statistical tests on these metrics (which cover the different aspects of the interpretability of linguistic fuzzy models), show the effectiveness of the proposed method. The learning process preserves high-level semantics with fewer than 5 rules on average, while clearly outperforming previous state-of-the-art methods for learning interpretable linguistic fuzzy regression systems, and even a competitive, purely accuracy-oriented linguistic learning approach. Finally, we analyze a case study on a real problem related to childhood obesity, with the analysis carried out by a domain expert.
Funder
ERDF / Regional Government of Andalusia / Ministry of Economic Transformation, Industry, Knowledge and Universities
ERDF / Health Institute Carlos III / Spanish Ministry of Science, Innovation and Universities
Spanish Ministry of Economy and Competitiveness
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computational Theory and Mathematics, Theoretical Computer Science, Software
Cited by
6 articles.