Author:
Kang Junegyu, Le Van Nhat Thang, Lee Dae-Woo, Kim Sungchan
Abstract
The classification and localization of odontogenic lesions in panoramic radiographs are challenging tasks due to the positional biases and class imbalances of the lesions. To address these challenges, a novel neural network, DOLNet, is proposed that uses mutually influencing hierarchical attention across different image scales to jointly learn the global representation of the entire jaw and the local discrepancy between normal tissue and lesions. The proposed approach uses local attention to learn representations within a patch. From the patch-level representations, we generate inter-patch, i.e., global, attention maps to represent the positional prior of lesions in the whole image. Global attention enables the reciprocal calibration of patch-level representations by considering non-local information from other patches, thereby improving the generation of whole-image-level representations. To address class imbalances, we propose an effective data augmentation technique that merges lesion crops with normal images, synthesizing new abnormal cases for effective model training. Our approach outperforms recent studies, enhancing classification performance by up to 42.4% and 44.2% in recall and F1 score, respectively, and ensuring robust lesion localization with respect to lesion size variations and positional biases. Our approach further outperforms human expert clinicians in classification by 10.7% and 10.8% in recall and F1 score, respectively.
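The augmentation described in the abstract, merging lesion crops into normal radiographs to synthesize new abnormal cases, can be illustrated with a minimal sketch. The function name `paste_lesion` and the hard-paste compositing below are illustrative assumptions, not the paper's exact procedure (which may blend or mask the crop differently):

```python
import numpy as np

def paste_lesion(normal_img: np.ndarray, lesion_crop: np.ndarray,
                 top: int, left: int) -> np.ndarray:
    """Synthesize an abnormal image by pasting a lesion crop onto a
    normal radiograph at (top, left). Hypothetical sketch: a hard paste
    with no boundary blending. Returns a new array; inputs are unmodified."""
    h, w = lesion_crop.shape[:2]
    if top + h > normal_img.shape[0] or left + w > normal_img.shape[1]:
        raise ValueError("lesion crop does not fit at the given position")
    out = normal_img.copy()
    out[top:top + h, left:left + w] = lesion_crop
    return out
```

In practice, the paste position could be sampled to reflect the positional prior of lesions in the jaw, so the synthesized cases match the distribution the global attention maps are meant to capture.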
Funder
National Research Foundation of Korea
The institute of Information & Communications Technology Planning & Evaluation
Publisher
Springer Science and Business Media LLC
Cited by: 1 article.