Authors:
Matsuda Shinpei, Miyamoto Takashi, Yoshimura Hitoshi, Hasegawa Tatsuhito
Abstract
Forensic dental examination has played an important role in personal identification (PI). However, PI has essentially been based on traditional visual comparisons of ante- and postmortem dental records and radiographs, and there is no globally accepted PI method based on digital technology. Although many effective image recognition models have been developed, they have been underutilized in forensic odontology. The aim of this study was to verify the usefulness of PI with paired orthopantomographs obtained in a relatively short period using convolutional neural network (CNN) technologies. Thirty pairs of orthopantomographs obtained on different days were analyzed in terms of the accuracy of dental PI based on six well-known CNN architectures: VGG16, ResNet50, Inception-v3, InceptionResNet-v2, Xception, and MobileNet-v2. Each model was trained and tested using paired orthopantomographs, and pretraining and fine-tuning transfer learning methods were validated. Higher validation accuracy was achieved with fine-tuning than with pretraining, and each architecture showed a detection accuracy of 80.0% or more. The VGG16 model achieved the highest accuracy (100.0%) with pretraining and with fine-tuning. This study demonstrated the usefulness of CNN for PI using small numbers of orthopantomographic images, and it also showed that VGG16 was the most useful of the six tested CNN architectures.
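The abstract contrasts two transfer-learning regimes, "pretraining" (using the ImageNet-pretrained convolutional base as a frozen feature extractor) and "fine-tuning" (also updating the base weights). The sketch below illustrates how such a setup is commonly built with Keras and the VGG16 base; it is not the authors' implementation. The task formulation, input size, classifier head, class count, and learning rates are illustrative assumptions, since the abstract does not specify them. The other five architectures can be substituted by swapping the base-model constructor.

```python
# Hedged sketch only: the per-individual softmax head, 224x224 input size,
# and learning rates are assumptions not stated in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16


def build_vgg16_classifier(num_classes: int = 30,
                           fine_tune: bool = False,
                           input_shape=(224, 224, 3)) -> tf.keras.Model:
    """VGG16 transfer-learning model for orthopantomograph-based PI.

    fine_tune=False ("pretraining"): the ImageNet-pretrained convolutional
    base is frozen and only the new classification head is trained.
    fine_tune=True: the convolutional base is also updated during training.
    """
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = fine_tune  # freeze or unfreeze the convolutional base

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        # One class per individual (assumption about the task formulation).
        layers.Dense(num_classes, activation="softmax"),
    ])
    # A smaller learning rate is customary when fine-tuning pretrained weights.
    lr = 1e-5 if fine_tune else 1e-3
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Hypothetical usage: train on one orthopantomograph of each pair and
# validate on the paired image taken on a different day.
# model = build_vgg16_classifier(num_classes=30, fine_tune=True)
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```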
Publisher
Springer Science and Business Media LLC
Cited by
23 articles.