Affiliations:
1. Department of Artificial Intelligence Systems, Lviv Polytechnic National University, 5 Kniazia Romana St., Lviv, Ukraine
2. Department of Applied Mathematics, University of Agriculture in Krakow, 21 Mickiewicza Al., Krakow, Poland
3. Institute of Mathematics of NAS of Ukraine, 3 Tereschenkivska St., Kyiv, Ukraine
Abstract
This research focuses on the development of an explainable artificial intelligence (Explainable AI, or XAI) system for the analysis of medical data. Medical imaging and related datasets are inherently complex due to their high dimensionality and the intricate biological patterns they represent. Decoding and interpreting these data requires sophisticated computational models, which has led to the widespread adoption of deep neural networks. However, while these models achieve remarkable accuracy, their "black-box" nature raises legitimate concerns about their interpretability and reliability in clinical contexts.
To address this challenge, three approaches can be considered: traditional statistical methods, a single complex neural network, or an ensemble of simpler neural networks. Traditional statistical methods, though transparent, often lack the sensitivity required to capture the intricate patterns within medical images. A single complex neural network, while powerful, can be too generalized, making specific interpretations difficult. Our proposed strategy therefore employs a hybrid system that combines multiple neural networks with distinct architectures, each tailored to a specific facet of the medical data interpretation problem.
The key components of the proposed technology are a module for detecting anomalies within medical images, a module for categorizing detected anomalies into specific medical conditions, and a module for generating user-friendly, clinically relevant interpretations.
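The three-module pipeline described above can be sketched in outline. This is a minimal illustrative mock-up, not the authors' implementation: the class names (`AnomalyDetector`, `AnomalyClassifier`, `InterpretationModule`), the z-score detection rule, and the toy labels are all hypothetical stand-ins for the trained networks the abstract describes.

```python
import numpy as np

class AnomalyDetector:
    """Flags pixels whose intensity deviates strongly from the image mean
    (a stand-in for the anomaly-detection network)."""
    def __init__(self, z_threshold=2.5):
        self.z_threshold = z_threshold

    def detect(self, image):
        z = (image - image.mean()) / (image.std() + 1e-8)
        return np.argwhere(np.abs(z) > self.z_threshold)  # anomaly coordinates

class AnomalyClassifier:
    """Assigns a condition label to each anomaly
    (a stand-in for the classification network); labels are illustrative."""
    LABELS = ("lesion", "calcification")

    def classify(self, image, coords):
        # Toy rule: brighter-than-average anomalies vs. darker ones.
        return [self.LABELS[0] if image[tuple(c)] > image.mean() else self.LABELS[1]
                for c in coords]

class InterpretationModule:
    """Turns the other modules' outputs into a clinician-readable summary."""
    def explain(self, coords, labels):
        if len(coords) == 0:
            return "No anomalies detected."
        parts = [f"{lab} at ({int(c[0])}, {int(c[1])})"
                 for c, lab in zip(coords, labels)]
        return "Detected: " + "; ".join(parts)

def pipeline(image):
    detector, classifier, explainer = AnomalyDetector(), AnomalyClassifier(), InterpretationModule()
    coords = detector.detect(image)
    labels = classifier.classify(image, coords)
    return explainer.explain(coords, labels)

# Demonstration on synthetic data with one injected bright anomaly.
rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
img[10, 10] = 8.0
print(pipeline(img))
```

Each module exposes a narrow interface, so any stage can be replaced by a trained network of a different architecture without touching the rest of the pipeline, which mirrors the ensemble-of-specialized-networks design argued for above.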