Abstract
The computational modeling and analysis of traditional Chinese painting (TCP) rely heavily on cognitive classification based on visual perception. This approach is crucial for understanding and identifying artworks created by different artists. However, the effective integration of visual perception into artificial intelligence (AI) models remains largely unexplored. In addition, research on the classification of Chinese paintings faces certain challenges, such as insufficient investigation into the image characteristics that are specific to author classification and recognition. To address these issues, we propose a novel framework called the multi-channel color fusion network (MCCFNet), which extracts visual features from diverse color perspectives. By considering multiple color channels, MCCFNet enhances the ability of AI models to capture the intricate details and nuances present in Chinese paintings. To improve the performance of the DenseNet model, we introduce a regional weighted pooling (RWP) strategy specifically designed for the DenseNet169 architecture; this strategy enhances the extraction of highly discriminative features. In our experimental evaluation, we comprehensively compared the proposed MCCFNet model against six state-of-the-art models on a dataset of 2436 TCP samples drawn from the works of 10 renowned Chinese artists, using Top-1 accuracy and the area under the curve (AUC) as evaluation metrics. The experimental results show that MCCFNet significantly outperforms all benchmark methods, achieving the highest classification accuracy of 98.68%. Moreover, the classification accuracy of other deep learning models on TCP can also be substantially improved by adopting our proposed framework.
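The sketch below illustrates the general idea described in the abstract: the same painting is encoded in several color representations, each view is passed through a DenseNet169 backbone, global average pooling is replaced by a regional weighted pooling over the final feature map, and the pooled features are fused for 10-way artist classification. The module names (MCCFNetSketch, RegionalWeightedPooling), the 3x3 region grid, the placeholder color views, and fusion by concatenation are assumptions made for illustration only; the paper's exact design is not given in the abstract.

```python
# Minimal, illustrative sketch of a multi-channel color fusion pipeline with
# regional weighted pooling (RWP) on DenseNet169. Assumption-heavy: not the
# authors' implementation.

import torch
import torch.nn as nn
from torchvision.models import densenet169


class RegionalWeightedPooling(nn.Module):
    """Pool a CxHxW feature map by averaging over an RxR grid of regions and
    combining the regions with learned softmax weights (one weight per region)."""

    def __init__(self, grid_size: int = 3):
        super().__init__()
        self.grid = nn.AdaptiveAvgPool2d(grid_size)            # -> B x C x R x R
        self.region_logits = nn.Parameter(torch.zeros(grid_size * grid_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        regions = self.grid(x).flatten(2)                      # B x C x R*R
        weights = torch.softmax(self.region_logits, dim=0)     # R*R
        return (regions * weights).sum(dim=-1)                 # B x C


class MCCFNetSketch(nn.Module):
    """Multi-branch DenseNet169 with RWP; one branch per color view."""

    def __init__(self, color_views, num_classes: int = 10, grid_size: int = 3):
        super().__init__()
        self.color_views = color_views                          # callables: RGB tensor -> view tensor
        self.backbones = nn.ModuleList(
            [densenet169(weights=None).features for _ in color_views]
        )
        self.pools = nn.ModuleList(
            [RegionalWeightedPooling(grid_size) for _ in color_views]
        )
        feat_dim = 1664 * len(color_views)                      # DenseNet169 ends with 1664 channels
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        feats = []
        for view, backbone, pool in zip(self.color_views, self.backbones, self.pools):
            feats.append(pool(backbone(view(rgb))))
        return self.classifier(torch.cat(feats, dim=1))         # fuse by concatenation


if __name__ == "__main__":
    # Two toy "color views": the original RGB tensor and a grayscale view
    # replicated to 3 channels. In practice, HSV/Lab conversions (e.g. via
    # kornia or OpenCV) would stand in for these placeholders.
    views = [
        lambda x: x,
        lambda x: x.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1),
    ]
    model = MCCFNetSketch(views, num_classes=10)
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])
```

Fusing by concatenating the per-branch pooled features is the simplest choice; weighted or attention-based fusion over the branches would be a natural variant under the same framework.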
Funder
Shaanxi Jishui Landscape Engineering Co., Ltd
Publisher
Springer Science and Business Media LLC
Subject
Cognitive Neuroscience, Computer Science Applications, Computer Vision and Pattern Recognition
References: 45 articles.
Cited by
2 articles.