Authors: Sakib Mostafa, Debajyoti Mondal, Karim Panjvani, Leon Kochian, Ian Stavness
Abstract
The increasing human population and variable weather conditions, due to climate change, pose a threat to the world's food security. To improve global food security, we need to provide breeders with tools to develop crop cultivars that are more resilient to extreme weather conditions and provide growers with tools to more effectively manage biotic and abiotic stresses in their crops. Plant phenotyping, the measurement of a plant's structural and functional characteristics, has the potential to inform, improve, and accelerate both breeders' selections and growers' management decisions. To improve the speed, reliability, and scale of plant phenotyping procedures, many researchers have adopted deep learning methods to estimate phenotypic information from images of plants and crops. Despite the successful results of these image-based phenotyping studies, the representations learned by deep learning models remain difficult to interpret, understand, and explain. For this reason, deep learning models are still considered to be black boxes. Explainable AI (XAI) is a promising approach for opening the deep learning black box and providing plant scientists with image-based phenotypic information that is interpretable and trustworthy. Although various fields of study have adopted XAI to advance their understanding of deep learning models, it has yet to be well studied in the context of plant phenotyping research. In this review article, we survey existing XAI studies in plant shoot phenotyping, as well as in related domains, to help plant researchers understand the benefits of XAI and to make it easier for them to integrate XAI into their future studies. An elucidation of the representations within a deep learning model can help researchers explain the model's decisions, relate the features detected by the model to the underlying plant physiology, and enhance the trustworthiness of image-based phenotypic information used in food production systems.
References: 234 articles.
Cited by 9 articles.