Toward reliable machine learning with Congruity: a quality measure based on formal concept analysis

Authors

Carmen De Maio, Giuseppe Fenza, Mariacristina Gallo, Vincenzo Loia, Claudio Stanzione

Abstract

The spread of machine learning (ML) and deep learning (DL) methods into critical application domains, such as medicine and healthcare, introduces many opportunities but also raises risks and ethical issues, mainly concerning the lack of transparency. This contribution addresses the lack of transparency of ML and DL models, focusing on the lack of trust in the predictions and decisions they generate. To this end, the paper establishes a measure, named Congruity, that provides information about the reliability of ML/DL model results. Congruity is defined over the lattice extracted through formal concept analysis of the training data. It measures how close incoming data items are to those used at the training stage of the ML and DL models. The underlying idea is that the reliability of a trained model's results is highly correlated with the similarity between the input data and the training set. The objective of the paper is to demonstrate the correlation between Congruity and the well-known accuracy of the whole ML/DL model. Experimental results reveal that the correlation between Congruity and model accuracy exceeds 80% across the ML models evaluated.
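The abstract's idea, measuring how close a new item is to the training data via a concept lattice, can be sketched in a few lines. The toy formal context, the attribute names, and the `congruity` scoring formula below are illustrative assumptions for exposition, not the paper's exact definition:

```python
from itertools import combinations

# Toy formal context: training objects (rows) with binary attributes.
# Attribute names are invented for illustration.
context = {
    "p1": {"fever", "cough"},
    "p2": {"fever", "headache"},
    "p3": {"cough", "fatigue"},
    "p4": {"fever", "cough", "fatigue"},
}
attributes = set().union(*context.values())

def closure(attrs):
    """FCA double-prime operator: attribute set -> extent -> closed intent."""
    extent = [obj for obj, a in context.items() if attrs <= a]
    if not extent:                  # empty extent: intent is all attributes
        return set(attributes)
    return set.intersection(*(context[obj] for obj in extent))

# Naively enumerate all closed intents (concept intents of the lattice)
# by closing every attribute subset; fine for a toy context only.
intents = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        intents.add(frozenset(closure(set(combo))))

def congruity(item_attrs):
    """Illustrative score in [0, 1]: size of the largest training-derived
    intent contained in the new item's attributes, over the item's size."""
    best = max((len(i) for i in intents if i <= item_attrs), default=0)
    return best / len(item_attrs) if item_attrs else 0.0
```

An item whose attribute pattern already appears among the training-derived concepts scores high (e.g. `congruity({"fever", "cough"})` returns `1.0`), while an item with attributes never seen at training time scores `0.0`, flagging the model's prediction for it as less reliable.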

Funder

Università degli Studi di Salerno

Publisher

Springer Science and Business Media LLC

Subject

Artificial Intelligence, Software

