Affiliations:
1. Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, 80336 Munich, Germany
2. Institute for Software Engineering, University of Duisburg-Essen, 45127 Essen, Germany
Abstract
Several artificial intelligence-based models have been presented for the detection of periodontal bone loss (PBL), mostly using convolutional neural networks, which represent the state of the art in deep learning. Given the emerging breakthrough of transformer networks in computer vision, we aimed to evaluate various models for automated PBL detection. An image data set of 21,819 anonymized periapical radiographs from the upper/lower and anterior/posterior regions was assessed for PBL by calibrated dentists. Five vision transformer networks (ViT-base/ViT-large from Google, BEiT-base/BEiT-large from Microsoft, DeiT-base from Facebook/Meta) were utilized and evaluated. Accuracy (ACC), sensitivity (SE), specificity (SP), positive/negative predictive value (PPV/NPV) and area under the ROC curve (AUC) were determined statistically. The overall diagnostic ACC and AUC values across all evaluated transformer networks ranged from 83.4 to 85.2% and from 0.899 to 0.918, respectively. Diagnostic performance differed by region: lower anterior (ACC 94.1–96.7%; AUC 0.944–0.970), upper anterior (ACC 86.7–90.2%; AUC 0.948–0.958), lower posterior (ACC 85.6–87.2%; AUC 0.913–0.937) and upper posterior teeth (ACC 78.1–81.0%; AUC 0.851–0.875). In this study, only minor differences among the tested networks were detected for PBL detection. To increase diagnostic performance and support the clinical use of such networks, further optimisation with larger, manually annotated image data sets is needed.
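The following is a minimal sketch, not the authors' code, of how such an evaluation could be set up: a pretrained vision transformer is given a binary classification head (PBL present/absent) and the reported diagnostic metrics are computed from ground-truth labels and predicted probabilities. The checkpoint name and the helper function are illustrative assumptions; only standard Hugging Face transformers and scikit-learn calls are used.

```python
# Sketch only: binary PBL classification with a pretrained ViT and the
# diagnostic metrics reported in the abstract (ACC, SE, SP, PPV, NPV, AUC).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score
from transformers import ViTForImageClassification

# Pretrained ViT backbone with a 2-class head (assumed checkpoint name).
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=2
)

def diagnostic_metrics(y_true, y_prob, threshold=0.5):
    """Compute ACC, SE, SP, PPV, NPV and AUC from true labels (0/1)
    and predicted probabilities of PBL (hypothetical helper)."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "SE": tp / (tp + fn),          # sensitivity (recall for PBL)
        "SP": tn / (tn + fp),          # specificity
        "PPV": tp / (tp + fp),         # positive predictive value
        "NPV": tn / (tn + fn),         # negative predictive value
        "AUC": roc_auc_score(y_true, y_prob),
    }
```

Per-region results such as those reported for anterior versus posterior teeth would simply be obtained by applying the same metric computation to the corresponding subsets of the test radiographs.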
Cited by
1 article.