Author:
Zhou Zhili, Wang Meimin, Cao Yi, Su Yuecheng
Abstract
As one of the important techniques for protecting the copyrights of digital images, content-based image copy detection has attracted a lot of attention in the past few decades. Traditional content-based copy detection methods usually extract local hand-crafted features and then quantize them into visual words with the bag-of-visual-words (BOW) model to build an inverted index file for rapid image matching. Recently, deep learning features, such as those derived from convolutional neural networks (CNN), have been proven to outperform hand-crafted features in many computer vision applications. However, it is not feasible to directly apply existing global CNN features to copy detection, since they are usually sensitive to partial content-discarding attacks, such as cropping and occlusion. Thus, we propose a local CNN feature-based image copy detection method with contextual hash embedding. We first extract local CNN features from images and quantize them into visual words to construct an index file. Then, since the BOW quantization process reduces the discriminability of these features to some extent, a contextual hash sequence is computed from a relatively large region surrounding each CNN feature and embedded into the index file to improve the feature's discriminability. Extensive experimental results demonstrate that the proposed method achieves superior performance compared to related works on the copy detection task.
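A minimal sketch of the indexing pipeline described in the abstract is given below. All function names, the random-projection hashing scheme, and the data shapes are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: local CNN features are quantized to visual words (BOW),
# and a contextual hash from the surrounding region is stored in an inverted index.
import numpy as np
from collections import defaultdict

def quantize(feature, codebook):
    """Assign a local CNN feature to its nearest visual word (BOW quantization)."""
    dists = np.linalg.norm(codebook - feature, axis=1)
    return int(np.argmin(dists))

def contextual_hash(context_feature, projections):
    """Binarize a feature pooled from the region around a local feature into a
    short hash via sign thresholding of random projections (an assumed scheme)."""
    return tuple((context_feature @ projections > 0).astype(np.uint8))

def build_index(images, codebook, projections):
    """Inverted index: visual word -> list of (image_id, contextual hash)."""
    index = defaultdict(list)
    for image_id, (local_feats, context_feats) in images.items():
        for f, c in zip(local_feats, context_feats):
            word = quantize(f, codebook)
            index[word].append((image_id, contextual_hash(c, projections)))
    return index

# Toy usage with random data standing in for CNN feature maps.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(1000, 512))    # 1000 visual words, 512-D features
projections = rng.normal(size=(512, 64))   # 64-bit contextual hash
images = {"img_1": (rng.normal(size=(20, 512)), rng.normal(size=(20, 512)))}
index = build_index(images, codebook, projections)
```

At query time, candidate matches retrieved through the shared visual word would additionally be filtered by the Hamming distance between contextual hashes, which is how the embedded hash recovers the discriminability lost to quantization.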
Funder
National Natural Science Foundation of China
MOST
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)
Cited by
7 articles.