Abstract
To address the information lost during image patch embedding and the lack of style features in images generated by transformer-based style transfer, we propose a style transfer model that combines contrastive learning with a transformer-backbone generative adversarial network. First, overlapping patch embedding serializes the input images to capture richer features; next, the contrastive style and content losses mine the non-salient information shared among stylized images to reduce style and content loss; finally, a multi-scale discriminator distinguishes real images from stylized ones, driving the generator to produce stronger style features. Results on the MS-COCO and WikiArt datasets show that the model improves the quality and transformation efficiency of stylized images in style transfer.
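The two core ingredients named above can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration rather than the authors' implementation: the names OverlappingPatchEmbed and contrastive_loss, the kernel/stride sizes, the embedding dimension, and the temperature tau are all assumptions made for the example. Overlap comes from choosing a convolution kernel larger than its stride, and the contrastive term follows the standard InfoNCE form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OverlappingPatchEmbed(nn.Module):
    """Serialize an image into overlapping patch tokens.

    A convolution whose kernel is larger than its stride makes adjacent
    patches share pixels, so less local information is lost than with
    the usual non-overlapping (kernel == stride) patch embedding.
    """
    def __init__(self, in_chans=3, embed_dim=512, patch_size=7, stride=4):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=stride,
                              padding=patch_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        x = self.proj(x)                    # (B, C, H', W')
        B, C, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)    # (B, H'*W', C) token sequence
        return self.norm(x), (H, W)

def contrastive_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE-style loss: pull each anchor toward its positive and push
    it away from negatives drawn from other stylized images.

    anchor, positive: (B, D) feature vectors; negatives: (K, D).
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = torch.sum(anchor * positive, dim=-1, keepdim=True) / tau  # (B, 1)
    neg = anchor @ negatives.t() / tau                              # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    # The positive sits at index 0 of each row of logits.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

With the illustrative values above, the kernel (7) exceeds the stride (4), so neighboring tokens overlap by three pixels on each side, preserving local detail that non-overlapping embeddings discard; the same loss form can be instantiated separately with style features and with content features to obtain the two contrastive terms.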