BACKGROUND
The inferior alveolar nerve (IAN) supplies sensation to the mandibular teeth and lower lip, and its position must be monitored during surgery to prevent damage. Therefore, a study using artificial intelligence (AI) was planned to automatically image and track the position of the IAN for quicker and safer surgery.
OBJECTIVE
In this study, we used AI to segment the precise position of the IAN. The accuracy of this technique was evaluated by comparing the automatically segmented position with that manually specified by a specialist; segmentation accuracy and annotation efficiency were found to improve with learning.
METHODS
A total of 138 cone-beam computed tomography (CBCT) datasets (internal: 98, external: 40) collected from three hospitals were used in this study. A customized 3D nnU-Net was used for image segmentation. Active learning was carried out in three iterative steps on 83 datasets, with datasets added cumulatively after each step. The accuracy of the model for IAN segmentation was then evaluated on the remaining datasets. Accuracy was compared across learning steps using the Dice similarity coefficient (DSC) and the segmentation time per dataset. In addition, visual scoring was used to comparatively evaluate manual and automatic segmentation.
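The DSC used for evaluation is a standard overlap metric, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of its computation on binary voxel masks follows; the NumPy implementation and the toy 2D arrays are illustrative assumptions, not part of the study's pipeline (the study used 3D CBCT volumes).

```python
import numpy as np

def dice_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

# Toy 2D example (hypothetical masks; real inputs are 3D segmentations)
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_similarity(a, b), 2))  # 2*2/(3+3) ≈ 0.67
```

A DSC of 1.0 indicates perfect overlap with the specialist's annotation, and 0.0 indicates no overlap.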
RESULTS
After learning, the DSC increased gradually from 0.48 ± 0.11 to 0.50 ± 0.11, and then to 0.58 ± 0.08. The DSC for the external dataset was 0.49 ± 0.12. The times required for segmentation were 124.8, 143.4, and 86.4 s, showing a large decrease at the final step. In visual scoring, manual segmentation was found to be more accurate than automatic segmentation.
CONCLUSIONS
The deep active learning framework can serve as a fast, accurate, and robust clinical tool for delineating the location of the IAN.