Abstract
Background
Three-dimensional facial stereophotogrammetry, a convenient, noninvasive, and highly reliable evaluation tool, has in recent years shown great potential in plastic surgery for preoperative planning and for evaluating treatment efficacy. However, obtaining anthropometric data requires trained evaluators to identify facial landmarks manually, which is time consuming and labor intensive. Automatic 3D facial landmark localization has the potential to speed data acquisition and eliminate evaluator error.
Objectives
The aim of this work was to describe a novel deep-learning method based on dimension transformation and key-point detection for automated 3D perioral landmark annotation.
Methods
After transforming the 3D facial model into 2D images, a High-Resolution Network (HRNet) is applied for key-point detection. The 2D key-point coordinates are then mapped back onto the 3D model mathematically to obtain the 3D landmark coordinates. The program was trained on 120 facial models and validated on 50 facial models.
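A minimal sketch of the 2D-to-3D back-mapping step is shown below, under assumptions not stated in the abstract: the mesh is loaded with trimesh, the face is roughly aligned with the +z axis, a simple orthographic front view is used, and detect_keypoints_2d is a hypothetical placeholder for the trained HRNet detector. This is an illustration of the general idea, not the authors' implementation.

```python
# Illustrative sketch only: orthographic projection of a face mesh and
# ray-casting of detected 2D key points back onto the 3D surface.
import numpy as np
import trimesh


def detect_keypoints_2d(image: np.ndarray) -> np.ndarray:
    """Placeholder for the 2D key-point detector (e.g., a trained HRNet).

    A real implementation would run the network on `image` and return an
    (N, 2) array of pixel coordinates.
    """
    raise NotImplementedError


def orthographic_params(mesh: trimesh.Trimesh, image_size: int = 512):
    """Scale/offset that maps mesh x/y extents to an image_size x image_size grid."""
    lo, hi = mesh.bounds  # (min_xyz, max_xyz)
    scale = (image_size - 1) / (hi[:2] - lo[:2]).max()
    return lo, scale


def pixels_to_3d(mesh: trimesh.Trimesh, keypoints_px: np.ndarray,
                 lo: np.ndarray, scale: float) -> np.ndarray:
    """Back-project 2D key points onto the mesh by casting rays along -z.

    Pixel y-flip and perspective effects are ignored for simplicity.
    """
    # Undo the forward mapping px = (xy - lo[:2]) * scale.
    xy = keypoints_px / scale + lo[:2]
    z_start = mesh.bounds[1][2] + 1.0  # start the rays just in front of the face
    origins = np.column_stack([xy, np.full(len(xy), z_start)])
    directions = np.tile([0.0, 0.0, -1.0], (len(xy), 1))

    locations, index_ray, _ = mesh.ray.intersects_location(origins, directions)

    # Keep the front-most (largest z) hit for each ray.
    landmarks = np.full((len(xy), 3), np.nan)
    for loc, ray in zip(locations, index_ray):
        if np.isnan(landmarks[ray, 2]) or loc[2] > landmarks[ray, 2]:
            landmarks[ray] = loc
    return landmarks
```

In practice the 3D landmark is taken as the intersection of the back-projected ray with the facial surface; the sketch keeps the nearest (front-most) hit so that points on the back of the head are discarded.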
Results
Our approach achieved a satisfactory mean [standard deviation] landmark detection error of 1.30 [0.68] mm, with a mean processing time of 5.2 [0.21] seconds per model. Subsequent analysis based on these landmarks showed mean errors of 0.87 [1.02] mm for linear measurements and 5.62° [6.61°] for angular measurements.
Conclusions
This automated 3D perioral landmarking method could serve as an effective tool that enables fast and accurate anthropometric analysis of lip morphology for plastic surgery and aesthetic procedures.
Funder
National High Level Hospital Clinical Research Funding
Collaborative Innovation Fund of Chinese Academy of Medical Sciences
Publisher
Oxford University Press (OUP)