SPA-Net: A Deep Learning Approach Enhanced Using a Span-Partial Structure and Attention Mechanism for Image Copy-Move Forgery Detection
Author:
Zhao Kaiqi 1, Yuan Xiaochen 2, Xie Zhiyao 2, Xiang Yan 1, Huang Guoheng 3, Feng Li 1
Affiliation:
1. School of Computer Science and Engineering, Macau University of Science and Technology, Macao 999078, China
2. Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, China
3. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
Abstract
With the wide application of visual sensors and the development of digital image processing technology, image copy-move forgery detection (CMFD) has become increasingly important. Copy-move forgery copies one or several areas of an image and pastes them into another part of the same image; CMFD is an efficient means of exposing such tampering, which is misused in industry, the military, and daily life. In this paper, we present an efficient end-to-end deep learning approach for CMFD using a span-partial structure and attention mechanism (SPA-Net). SPA-Net first extracts coarse features with a pre-processing module and then extracts fine deep feature maps with the span-partial structure and attention mechanism in its feature extractor module. The span-partial structure is designed to reduce redundant feature information, while the attention mechanism within it focuses on the tampered region and suppresses the original semantic information. To explore the correlation between high-dimensional feature points, a deep feature matching module helps SPA-Net locate the copy-move areas by computing the similarity of the feature map. A feature upsampling module then upsamples the features to the original image size and produces a copy-move mask. Furthermore, the training strategy of SPA-Net, which uses no pretrained weights, balances copy-move and semantic features, so the model can capture more features of copy-move forgery areas and reduce confusion from semantic objects.
In the experiments, we do not use pretrained weights or models from existing networks such as VGG16, which would bias the network toward paying more attention to semantic objects than to copy-move areas. To address this, we generated a SPANet-CMFD dataset by applying various processes to benchmark images from the SUN and COCO datasets, and we trained our model on existing copy-move forgery datasets (CMH, MICC-F220, MICC-F600, GRIP, Coverage, and part of USCISI-CMFD) together with our generated SPANet-CMFD dataset. In addition, the SPANet-CMFD dataset could also benefit related forgery-detection tasks, such as deepfake detection. We employed the CASIA and CoMoFoD datasets as test sets to verify the performance of the proposed method, using Precision, Recall, and F1 to evaluate the CMFD results. Comparison results show that our model achieves satisfactory performance on both test sets and outperforms existing methods.
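The deep feature matching step described above (locating copy-move areas by computing the similarity of the feature map) is commonly realized as a self-correlation over all spatial positions of the feature map. A minimal NumPy sketch of that idea, assuming cosine similarity; the function name and shapes are illustrative, not taken from the paper's code:

```python
import numpy as np

def self_correlation(features, eps=1e-8):
    """Best non-self cosine similarity for every spatial position of a
    deep feature map of shape (H, W, C). Positions whose best match
    elsewhere in the image scores near 1 are copy-move candidates."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c).astype(np.float64)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.maximum(norms, eps)   # unit-normalize each position
    sim = unit @ unit.T                    # (H*W, H*W) cosine similarities
    np.fill_diagonal(sim, -1.0)            # exclude trivial self-matches
    return sim.max(axis=1).reshape(h, w)   # best match score per position
```

Thresholding this score map (and, in a full pipeline, refining it with the upsampling module) yields the binary copy-move mask.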
Funder
Research Project of the Macao Polytechnic University Science and Technology Development Fund of Macau SAR
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
4 articles.