Affiliation:
1. International School of Arts, Dalian University of Foreign Languages, Dalian, Liaoning, China
2. Department of Computer Science, Government College University, Faisalabad, Pakistan
Abstract
Real-time image style transfer becomes achievable through computer vision and deep learning techniques: diverse artistic elements are fused into a single image, producing novel works of art. This article focuses on image style transfer in the context of art education and introduces ATT-CycleGAN, a model enriched with an attention mechanism to improve the quality and precision of style conversion. The framework enhances the generators of CycleGAN. Images are first downsampled by an encoder before entering an intermediate transformation network, where feature maps are extracted by four residual encoding blocks and then passed to an attention module. Channel attention is realized by combining per-channel weights derived from global max-pooling and global average-pooling. During training, transfer learning is used to initialize model parameters, improving training efficiency. Experimental results demonstrate the superior performance of the proposed model on image style transfer across several categories: compared with the traditional CycleGAN model, it achieves notably higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). Specifically, on the Places365 and selfie2anime datasets, SSIM increases by 3.19% and 1.31% and PSNR by 10.16% and 5.02%, respectively, relative to traditional CycleGAN. These findings provide algorithmic support and useful references for future research in art education, image segmentation, and style transfer.
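The channel-attention step described in the abstract (pooled descriptors from global max- and average-pooling combined into per-channel weights) can be sketched as follows. This is a minimal NumPy illustration in the spirit of CBAM-style channel attention, not the paper's implementation; the shared two-layer MLP, its bottleneck ratio, and the weight shapes are illustrative assumptions.

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """Re-weight the channels of a (C, H, W) feature map.

    Global average- and max-pooled descriptors are passed through a
    shared two-layer MLP (assumed weights w1: (C//r, C), w2: (C, C//r)),
    summed, and squashed by a sigmoid into per-channel weights in (0, 1).
    """
    avg = feats.mean(axis=(1, 2))            # global average-pooling -> (C,)
    mx = feats.max(axis=(1, 2))              # global max-pooling -> (C,)

    def mlp(v):
        hidden = np.maximum(w1 @ v, 0.0)     # ReLU bottleneck
        return w2 @ hidden

    # Sigmoid of the summed descriptors gives the channel weights.
    weights = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))
    return feats * weights[:, None, None]    # broadcast over H and W
```

Because each weight lies in (0, 1), the module attenuates less informative channels while preserving the feature map's shape, which is what lets it slot between the residual blocks and the decoder without further changes.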
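The reported SSIM and PSNR gains can be reproduced in principle with the standard metric definitions. Below is a minimal sketch: PSNR as usually defined, and a simplified single-window SSIM computed over the whole image (production evaluations typically use a sliding Gaussian window, e.g. scikit-image's implementation; the constants c1 and c2 follow the common 0.01/0.03 convention).

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Simplified SSIM using one global window instead of local windows."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * max_val) ** 2               # stabilizing constants
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Both metrics are full-reference: they compare a stylized output against a reference, so higher SSIM/PSNR here indicates the transfer better preserves structure and signal fidelity.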