Affiliation:
1. KAIST, Visual Media Lab
2. Technical University of Munich
Abstract
Facial sketches are both a concise way of showing the identity of a person and a means to express artistic intention. While a few techniques have recently emerged that allow sketches to be extracted in different styles, they typically rely on a large amount of data that is difficult to obtain. Here, we propose StyleSketch, a method for extracting high‐resolution stylized sketches from a face image. Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face and corresponding sketch images. The sketch generator uses part‐based losses and two‐stage learning for fast convergence and high‐quality sketch extraction. Through a set of comparisons, we show that StyleSketch outperforms existing state‐of‐the‐art sketch extraction methods and few‐shot image adaptation methods for the task of extracting high‐resolution abstract face sketches. We further demonstrate the versatility of StyleSketch by extending its use to other domains and explore the possibility of semantic editing. The project page can be found at https://kwanyun.github.io/stylesketch_project.
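To make the few-shot idea concrete, the following is a minimal, hypothetical sketch of the underlying principle: if the deep features of a pretrained generator are semantically rich, a small decoder fit on only 16 (feature, sketch) pairs can already recover a sketch mapping. All names, shapes, and the linear decoder are illustrative assumptions, not the paper's actual architecture or losses.

```python
import numpy as np

# Illustrative assumption: per-pixel deep features from a pretrained generator
# (here simulated with random data) are descriptive enough that a tiny decoder
# trained on just 16 pairs can recover the feature-to-sketch mapping.
rng = np.random.default_rng(0)

n_pairs = 16     # few-shot training set size, as in the abstract
feat_dim = 64    # stand-in for concatenated StyleGAN feature channels
n_pixels = 256   # stand-in spatial resolution (flattened)

# Simulated deep features for 16 face images: (n_pairs, n_pixels, feat_dim)
features = rng.normal(size=(n_pairs, n_pixels, feat_dim))

# Simulated target sketches generated by an unknown linear "style" that the
# decoder should recover from the 16 pairs.
true_w = rng.normal(size=(feat_dim,))
sketches = features @ true_w

# Fit a per-pixel linear sketch decoder by least squares over all 16 pairs.
X = features.reshape(-1, feat_dim)
y = sketches.reshape(-1)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The recovered decoder reproduces the training sketches almost exactly.
recon = features @ w_hat
err = np.abs(recon - sketches).max()
print(f"max reconstruction error: {err:.2e}")
```

The real method replaces the linear map with a trained sketch generator and adds part-based losses with two-stage learning, but the data efficiency rests on the same premise: most of the needed semantics already live in the pretrained features.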
Funder
Ministry of Culture, Sports and Tourism
Cited by
1 article.