Affiliation:
1. National Pilot Software Engineering School, Beijing University of Posts and Telecommunications, Beijing 100876, China
2. China Mobile Research Institute, Beijing, China
Abstract
Artistic portrait drawing (APDrawing) generation has made progress in recent years. However, because portrait drawings are naturally scarce and highly artistic, it is difficult to collect large-scale labeled and paired data or to divide drawing styles into a few well-recognized categories. Existing works suffer from limited labeled data and a naive manual division of drawing styles according to the corresponding artists. They cannot adapt to realistic situations: a single artist may have multiple drawing styles, and APDrawings from different artists may share similar styles. In this paper, we propose to use unlabeled and unpaired data and to perform the task in an unsupervised manner. Without manually dividing drawing styles, we treat each portrait drawing as a unique style and introduce self-supervised feature learning to learn free styles for unlabeled portrait drawings. In addition, we devise a style bank and a decoupled cycle structure to address the two main considerations in the task: generation quality and style control. Extensive experiments show that our model adapts to different style inputs better than state-of-the-art methods.
Funder
National Natural Science Foundation of China
Subject
Artificial Intelligence, Human-Computer Interaction, Theoretical Computer Science, Software