Authors:
Ma Danqing, Dang Bo, Li Shaojie, Zang Hengyi, Dong Xinqi
Abstract
As a branch of machine learning, deep learning models combined with artificial intelligence are widely used in computer vision, and image recognition fields represented by medical image analysis are developing rapidly. Their advantage is that they do not rely on human annotation: during training, the model can recognize and process feature information that humans overlook, achieving or even exceeding human-level accuracy. Because the internal data processing of deep models is opaque, they generally lack explainability; existing solutions mainly include building intrinsic (internal) explainability, interpreting specific models through attention mechanisms, and model-agnostic interpretation of otherwise unknowable models, represented by LIME. How to quantitatively assess interpretability is still being explored; in particular, for models involved in medical decision-making, several scales have been proposed as references for assessing interpretability from the perspectives of both doctors and patients. Current research on applying deep learning models to medical imaging generally emphasizes accuracy over explainability, and the resulting lack of explainability hinders the practical clinical application of these models. Therefore, analyzing the development of medical image analysis within artificial intelligence and computer vision, and balancing accuracy and interpretability to develop deep learning models that both doctors and patients can trust, will become a research focus of the field in the future.
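The model-agnostic interpretation that the abstract attributes to LIME can be illustrated with a minimal sketch: perturb a single instance, query the black-box model on the perturbed samples, and fit a locally weighted linear surrogate whose coefficients serve as the explanation. The black-box classifier, synthetic data, and kernel width below are illustrative assumptions, not the paper's own implementation or the official LIME library.

```python
# Minimal sketch of LIME-style, model-agnostic local explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in "black box": a classifier trained on synthetic tabular data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_instance(x, predict_proba, num_samples=2000, kernel_width=0.75):
    """Explain one prediction with a locally weighted linear surrogate."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    p = predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x (exponential kernel on distance).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable linear model; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

x0 = X[0]
for i, weight in enumerate(explain_instance(x0, black_box.predict_proba)):
    print(f"feature_{i}: {weight:+.3f}")
```

In a medical imaging setting the same idea is typically applied over superpixels rather than tabular features, so the surrogate weights highlight which image regions drove a particular diagnosis.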
Cited by
11 articles.