Affiliation:
1. AbbVie, Irvine, California, USA
Abstract

Introduction: The application of artificial intelligence to facial aesthetics has been limited by the inability to discern facial zones of interest, as defined by complex facial musculature and underlying structures. Although semantic segmentation models (SSMs) could potentially overcome this limitation, existing facial SSMs distinguish only three to nine facial zones of interest.

Methods: We developed a new supervised SSM, trained on 669 high-resolution, clinical-grade facial images; a subset of these images was used in an iterative process between facial aesthetics experts and manual annotators that defined and labeled 33 facial zones of interest.

Results: Because some zones overlap, some pixels are included in multiple zones, violating the one-to-one relationship between a given pixel and a specific class (zone) required for SSMs. The full facial zone model was therefore used to create three sub-models, each with completely non-overlapping zones, generating three outputs for each input image that can be treated as standalone models. For each facial zone, the output demonstrating the best intersection over union (IOU) value was selected as the winning prediction.

Conclusions: The new SSM demonstrates mean IOU values superior to manual annotation and landmark analyses, and it is more robust than landmark methods in handling variances in facial shape and structure.
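The per-zone winner selection described in the Results can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`iou`, `select_winning_prediction`) and the use of boolean NumPy masks are assumptions for demonstration; it shows only the idea of scoring each sub-model's mask for a zone by IOU and keeping the best one.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union else 0.0

def select_winning_prediction(sub_model_masks, truth_mask):
    """For one facial zone, score each sub-model's predicted mask
    against the ground-truth mask and return (index, IOU) of the
    best-scoring sub-model output."""
    scores = [iou(mask, truth_mask) for mask in sub_model_masks]
    best = int(np.argmax(scores))
    return best, scores[best]

# Tiny worked example with 4x4 masks (hypothetical data):
truth = np.zeros((4, 4), dtype=bool)
truth[:2, :2] = True                      # ground-truth zone

exact = truth.copy()                      # sub-model A: perfect match
partial = np.zeros((4, 4), dtype=bool)
partial[0, :2] = True                     # sub-model B: half the zone

winner, score = select_winning_prediction([partial, exact], truth)
```

In the tiny example, the exact mask has IOU 1.0 and the partial mask IOU 0.5, so the second sub-model is selected as the winning prediction for that zone.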