Abstract
We analyze a case study in smart agriculture using Explainable AI (XAI), a field of study that aims to provide interpretations and explanations of the behaviour of AI systems. The study addresses a multiclass classification problem on the Crop Recommendation dataset, where the original task is to predict the most suitable crop from seven features. In addition to the predictions, two of the best-known XAI approaches have been used to obtain explanations and interpretations of the models' behaviour: SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). Both packages provide easy-to-understand visualizations that allow non-expert users to understand explanations of individual predictions without delving into the mathematical details of the algorithms. Criticisms of these approaches have been raised within the scientific community, and some recent papers have brought their weaknesses to light. Nevertheless, the two algorithms remain among the most popular in XAI and are still considered reference points for the field.
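To illustrate the kind of local explanation the abstract describes, the sketch below implements the core idea behind LIME with only NumPy and scikit-learn: perturb a single instance, query the black-box classifier on the perturbations, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. This is a minimal illustration, not the paper's actual pipeline: the feature names (N, P, K, temperature, humidity, ph, rainfall) match the Crop Recommendation dataset, but the data here is synthetic, the task is reduced to binary classification for brevity, and in practice one would use the `shap` and `lime` packages directly.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for the Crop Recommendation data: 7 numeric features.
features = ["N", "P", "K", "temperature", "humidity", "ph", "rainfall"]
X = rng.normal(size=(500, 7))
# Toy label driven mainly by humidity and rainfall (columns 4 and 6).
y = (X[:, 4] + 0.5 * X[:, 6] > 0).astype(int)

# The "black box" whose single predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_like_explanation(model, x, n_samples=1000, kernel_width=0.75):
    """LIME-style local surrogate: perturb x, weight the perturbations by
    proximity to x, and fit a weighted linear model to the black box's
    predicted probabilities. The coefficients are the local explanation."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    probs = model.predict_proba(Z)[:, 1]
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return dict(zip(features, surrogate.coef_))

# Explain the model's prediction for one instance.
expl = lime_like_explanation(model, X[0])
for name, coef in sorted(expl.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:12s} {coef:+.3f}")
```

Each coefficient approximates how much a feature pushes the prediction up or down in the neighbourhood of the explained instance, which is the kind of per-prediction, feature-level attribution that SHAP and LIME render as bar and force plots.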
Funder
Università della Calabria
Publisher
Springer Science and Business Media LLC
Cited by
2 articles.
1. An Explainable Smart Agriculture System based on In-Vivo Biosensors; 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE); 2024-06-30
2. Leveraging Incremental Decision Trees and In-Vivo Biosensors for an Explainable Plant Health Monitoring System; 2024 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS); 2024-05-23