Affiliation:
1. School of Computer Science, Dalian Minzu University, Dalian, China
2. National Centre for Computer Animation, Bournemouth University, Bournemouth, UK
Abstract
Makeup in real life varies widely and is highly personalized, which poses a key challenge for makeup transfer. Most previous makeup transfer techniques divide the face into distinct regions for color transfer, frequently neglecting details such as eyeshadow and facial contours. Given the success of Transformers in various visual tasks, we believe this technology holds great potential for addressing differences in pose, expression, and occlusion. To explore this, we propose a novel pipeline that combines a well-designed Convolutional Neural Network (CNN) with a Transformer, leveraging the advantages of both networks for high-quality facial makeup transfer. This enables hierarchical extraction of both local and global facial features, facilitating the encoding of facial attributes into pyramid feature maps. Furthermore, a Low-Frequency Information Fusion Module is proposed to address large pose and expression variations between the source and reference faces by extracting makeup features from the reference and adapting them to the source. Experiments demonstrate that our method produces makeup faces that are visually more detailed and realistic, yielding superior results.
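The abstract describes a hybrid encoder in which convolutional layers capture local facial detail and Transformer blocks capture global context, producing pyramid (multi-scale) feature maps. The sketch below illustrates that general idea only; it is not the authors' implementation, and all module names, channel sizes, and the choice of PyTorch building blocks are assumptions made for illustration.

```python
# Minimal sketch (assumed design, not the paper's released code): a CNN + Transformer
# hybrid encoder that outputs pyramid feature maps for a face image.
import torch
import torch.nn as nn


class HybridStage(nn.Module):
    """One pyramid stage: strided convolution (local features) + self-attention (global context)."""

    def __init__(self, in_ch, out_ch, num_heads=4, depth=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(
            d_model=out_ch, nhead=num_heads, dim_feedforward=out_ch * 4, batch_first=True
        )
        self.attn = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.conv(x)                       # B, C, H, W  (local detail)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # B, H*W, C  (sequence of spatial tokens)
        tokens = self.attn(tokens)             # global self-attention across the face
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class PyramidEncoder(nn.Module):
    """Encodes a face image into multi-scale (pyramid) feature maps."""

    def __init__(self, channels=(3, 64, 128, 256)):
        super().__init__()
        self.stages = nn.ModuleList(
            HybridStage(channels[i], channels[i + 1]) for i in range(len(channels) - 1)
        )

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # one feature map per pyramid level
        return feats


if __name__ == "__main__":
    enc = PyramidEncoder()
    pyramid = enc(torch.randn(1, 3, 128, 128))
    print([tuple(f.shape) for f in pyramid])
    # e.g. (1, 64, 64, 64), (1, 128, 32, 32), (1, 256, 16, 16)
```

In such a layout, shallow stages retain fine local detail (e.g., eyeshadow texture) while deeper, lower-resolution stages carry global structure, which is the kind of hierarchy a fusion module could draw on when adapting reference makeup to a source face with a different pose or expression.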
Funder
H2020 Marie Skłodowska-Curie Actions
Natural Science Foundation of Liaoning Province
Cited by
1 article.