Feature Separation and Fusion to Optimise the Migration Model of Mural Painting Style in Tombs
Published: 2024-03-26
Volume: 14, Issue: 7, Page: 2784
ISSN: 2076-3417
Container-title: Applied Sciences
Short-container-title: Applied Sciences
Language: en
Author:
Wu Meng 1,2 (ORCID), Li Minghui 1, Zhang Qunxi 3
Affiliation:
1. School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China
2. Institute for Interdisciplinary and Innovative Research, Xi’an University of Architecture and Technology, Xi’an 710055, China
3. Shaanxi History Museum, Xi’an 710061, China
Abstract
Tomb murals differ from cave-temple and temple murals: as underground cultural relics, their painting style is unique, solemn, and austere, and the depicted images are characterised by simple colours, low contrast, and few surviving examples. During digital restoration, sufficient reference samples are needed to ensure the accuracy of the restoration. In addition, the style of tomb murals differs greatly from that of other murals and other types of wall paintings. It is therefore necessary to learn the unique artistic style of tomb murals, to provide stylistically consistent training samples for digital restoration, and to overcome the problems of dim lighting and complex surface granularity. This paper proposes a generative adversarial network algorithm that separates and fuses style features to enhance the generative network’s ability to acquire image information. The algorithm extracts the underlying and surface style features of the image under test and performs fusion generation experiments. A parsing layer in the generative network modifies the input noise tensor and optimises the corresponding weights to prevent misalignment between painted lines and mural cracks. Finally, to optimise the generated murals, a corresponding loss function is added to the discriminator. A tomb-mural dataset was established for the experiments, and the method was analysed quantitatively and qualitatively against other style migration models using SSIM, FID, LPIPS, and NIQE as evaluation indexes. The results were 0.97, 269.579, 0.425, and 3.250, respectively, and the style migration effect of the proposed method was significantly better than that of the control-group models.
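Of the four evaluation indexes, SSIM (structural similarity) is the most easily illustrated. The sketch below is a simplified global SSIM in pure Python, not the authors' implementation: the standard metric averages SSIM over local sliding windows, whereas this version computes a single value from the global means, variances, and covariance of two grayscale pixel sequences.

```python
# Illustrative global SSIM (no sliding window), assuming 8-bit grayscale
# pixel values flattened into equal-length sequences. Hypothetical helper,
# not the paper's evaluation code.

def global_ssim(x, y, data_range=255.0):
    """Return a single SSIM value for two equal-length pixel sequences."""
    assert len(x) == len(y) and len(x) > 1
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    # Sample variances and covariance.
    var_x = sum((a - mu_x) ** 2 for a in x) / (n - 1)
    var_y = sum((b - mu_y) ** 2 for b in y) / (n - 1)
    cov_xy = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / (n - 1)
    # Stabilising constants from the SSIM definition: C1=(0.01 L)^2, C2=(0.03 L)^2.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Identical images score exactly 1.0; small perturbations lower the score slightly. In practice one would use a windowed implementation such as `skimage.metrics.structural_similarity` rather than this global simplification.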
Funder
Cross-disciplinary Fund of Xi’an University of Architecture and Technology; National Natural Science Foundation of China; Ministry of Housing and Urban-Rural Development