Authors:
Lysov Maxim, Maximova Irina, Vasiliev Evgeny, Getmanskaya Alexandra, Turlapov Vadim
Abstract
This article is devoted to searching for high-level explainable features that remain explainable across a wide class of objects or phenomena and can become an integral part of explainable AI (XAI). The study involved a 25-day experiment on early diagnosis of wheat stress, using drought stress as an example. The state of the plants was periodically monitored with thermal infrared (TIR) and hyperspectral image (HSI) cameras. A classifier based on a single-layer perceptron (SLP) was used as the main instrument of the XAI study. To make the SLP input explainable, the raw HSI was replaced by images of six popular vegetation indices and three HSI channels (R630, G550, and B480), collectively referred to as indices, along with the TIR image. For the explainability analysis, each of the 10 images was further replaced by six statistical features: min, max, mean, std, max–min, and entropy. For output explainability, seven output neurons corresponding to the key states of the plants were chosen. The inner layer of the SLP consisted of 15 neurons: 10 corresponding to the indices and 5 reserve neurons. The classification capabilities of all 60 features and 10 indices of the SLP classifier were studied. Key result: entropy is the earliest high-level stress feature for all indices; entropy and an entropy-like feature (max–min), paired with one of the other statistical features, provide 100% (or near-100%) accuracy for most indices and can serve as an integral part of XAI.
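The six per-image statistics named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the histogram-based Shannon entropy estimator and the bin count are assumptions, since the abstract does not specify how entropy was computed.

```python
import numpy as np

def image_features(img: np.ndarray, bins: int = 256) -> dict:
    """Compute the six statistical features used as SLP inputs:
    min, max, mean, std, max-min, and entropy.
    Entropy here is the Shannon entropy of the intensity histogram
    (a common choice; the paper's exact estimator is an assumption)."""
    flat = img.ravel().astype(float)
    hist, _ = np.histogram(flat, bins=bins)
    p = hist / hist.sum()          # empirical probability per bin
    p = p[p > 0]                   # drop empty bins before log
    entropy = float(-np.sum(p * np.log2(p)))
    return {
        "min": float(flat.min()),
        "max": float(flat.max()),
        "mean": float(flat.mean()),
        "std": float(flat.std()),
        "range": float(flat.max() - flat.min()),  # max - min
        "entropy": entropy,
    }
```

Applied to each of the 10 index/TIR images, this yields the 10 × 6 = 60 features whose classification capabilities the study compares.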
Funder
Ministry of Science and Higher Education of the Russian Federation
Subject
General Physics and Astronomy
Cited by 2 articles.