Authors:
Afreen Najeeba, Aluvalu Rajanikanth
Abstract
INTRODUCTION: Glaucoma is an incurable eye disease and the second leading cause of vision loss. It is usually detected through a retinal scan. Glaucoma is difficult to predict in its nascent stages because its symptoms are not recognized until the disease has reached an advanced stage. Therefore, regular eye examinations are important and recommended. Manual glaucoma screening methods are labour-intensive and time-consuming. Deep learning-based glaucoma detection methods, by contrast, reduce the need for manual work and improve accuracy and speed.
OBJECTIVES: Conduct a literature analysis of recent technical publications that use AI, machine learning, and deep learning methodologies for automated glaucoma detection.
RESULTS: The search identified 329 Scopus-indexed articles on glaucoma detection using retinal images. The quantitative review presents state-of-the-art methods from these research publications and the fundus image databases used for qualitative and quantitative analysis. This paper presents the application of Explainable AI (XAI) to glaucoma prediction. XAI refers to artificial intelligence whose decisions and predictions humans can understand. This contrasts with the machine learning “black box” concept, where even the designer cannot explain why the model made a particular decision. XAI aims to improve user trust and performance. To provide reliable explanations for glaucoma prediction from healthy and diseased fundus images, the XAI approach primarily employs an Adaptive Neuro-Fuzzy Inference System (ANFIS).
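To illustrate the inference mechanism behind ANFIS, the following is a minimal first-order Sugeno-style sketch: Gaussian membership functions fire fuzzy rules, and the output is the firing-strength-weighted average of each rule's linear consequent. The rule parameters and the cup-to-disc-ratio framing below are hypothetical illustrations, not values from the reviewed studies.

```python
import math

def gaussian(x, mean, sigma):
    # Membership degree of input x under a Gaussian fuzzy set (ANFIS layer 1)
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def anfis_predict(x, rules):
    # rules: list of (mean, sigma, slope, intercept) tuples, i.e. a Gaussian
    # premise plus a first-order Sugeno linear consequent per rule
    weights = [gaussian(x, m, s) for m, s, _, _ in rules]       # layer 2: firing strengths
    total = sum(weights)                                        # layer 3: normalization term
    # layers 4-5: weighted average of the per-rule linear outputs
    return sum(w * (a * x + b) for w, (_, _, a, b) in zip(weights, rules)) / total

# Two illustrative rules on a hypothetical cup-to-disc-ratio feature:
rules = [(0.2, 0.1, 1.0, 0.0),   # "low ratio  -> low glaucoma-risk score"
         (0.7, 0.1, 1.0, 0.3)]   # "high ratio -> high glaucoma-risk score"
score = anfis_predict(0.65, rules)
```

Because each rule is an interpretable if-then statement, the firing strengths themselves serve as the explanation for a prediction, which is what makes ANFIS attractive in an XAI setting.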
CONCLUSION: This article proposes and compares the performance metrics of ANFIS & SNN fuzzy layers, VGG19, AlexNet, ResNet, and MobileNet.
Publisher
European Alliance for Innovation n.o.
References: 74 articles.