A Dual-Domain Perceptual Framework for Generating Visual Inconspicuous Counterparts
-
Published: 2017-05-05
Issue: 2
Volume: 13
Pages: 1-21
-
ISSN: 1551-6857
-
Container-title: ACM Transactions on Multimedia Computing, Communications, and Applications
-
Language: en
-
Short-container-title: ACM Trans. Multimedia Comput. Commun. Appl.
Author:
Su Zhuo 1,
Zeng Kun 1,
Li Hanhui 1,
Luo Xiaonan 2
Affiliation:
1. Sun Yat-sen University, Guangzhou, China
2. Sun Yat-sen University, Guangzhou, China
Abstract
For a given image, generating a corresponding counterpart with visually inconspicuous modifications is a challenging task. The complexity of this problem arises from the strong correlation between editing operations and visual perception. Essentially, the key requirement is to make object modifications hard to detect visually in the generated counterparts. In this article, we propose a novel dual-domain perceptual framework for generating visually inconspicuous counterparts, which applies the perceptual bidirectional similarity metric (PBSM) and the appearance similarity metric (ASM) to build a dual-domain perception error minimization model. Candidate targets are generated by the well-known PatchMatch model with stroke-based interactions and a selective object library. Using the dual-perceptual evaluation index, all candidate targets are ranked to select the best result. For demonstration, a series of objective and subjective measurements is used to evaluate the performance of our framework.
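The abstract describes a two-stage pipeline: candidate counterparts are produced first (PatchMatch-based editing guided by strokes and an object library), and then ranked by a combined perception error built from PBSM and ASM. The following Python sketch illustrates only the ranking stage under stated assumptions; the functions pbsm_placeholder and asm_placeholder, the weight alpha, and the weighted-sum combination are hypothetical stand-ins, not the paper's actual formulations.

```python
import numpy as np

# Hypothetical stand-ins for the paper's metrics: a pixel-difference score in
# place of PBSM and a histogram-difference score in place of ASM. The paper's
# real definitions are patch-based and more elaborate; these are placeholders.
def pbsm_placeholder(src, cand):
    return float(np.mean((src - cand) ** 2))

def asm_placeholder(src, cand, bins=32):
    hs, _ = np.histogram(src, bins=bins, range=(0.0, 1.0), density=True)
    hc, _ = np.histogram(cand, bins=bins, range=(0.0, 1.0), density=True)
    return float(np.abs(hs - hc).sum())

def dual_domain_error(src, cand, alpha=0.5):
    # Assumed weighted combination of the two perceptual terms.
    return alpha * pbsm_placeholder(src, cand) + (1 - alpha) * asm_placeholder(src, cand)

def select_best_counterpart(src, candidates, alpha=0.5):
    # Rank all candidate counterparts by the dual-domain error; keep the minimizer.
    return min(candidates, key=lambda c: dual_domain_error(src, c, alpha))

# Usage with random stand-in images; in the framework, the candidates would come
# from the PatchMatch-based editing stage with stroke interactions and the
# selective object library.
rng = np.random.default_rng(0)
source = rng.random((64, 64))
candidates = [np.clip(source + 0.05 * rng.standard_normal(source.shape), 0, 1)
              for _ in range(5)]
best = select_best_counterpart(source, candidates)
```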
Funder
Natural Science Foundation of Guangdong Province
Sun Yat-sen University
National Natural Science Foundation of China
Science and Technology Planning Project of Guangdong Province
Fundamental Research Funds for the Central Universities
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture
Cited by
1 article.