Affiliation:
1. University of Science and Technology of China, Hefei, China
Abstract
Online image sharing on social platforms can lead to unwanted privacy disclosure. For example, enterprises may analyze the large volumes of uploaded images to perform in-depth analysis of users' preferences for commercial purposes, and their technology may well be today's most powerful learning model, the deep neural network (DNN). To evade such automatic DNN detectors without degrading the visual quality perceived by human eyes, we design and implement a novel Stealth algorithm, which blinds the automatic detector to the existence of objects in an image by crafting a class of adversarial examples: from the detector's view, every object disappears as if wearing an "invisible cloak." We then evaluate the effectiveness of the Stealth algorithm through a newly defined measurement, named privacy insurance. The results indicate that our scheme guarantees privacy with a considerably higher success rate than other methods, such as mosaic, blur, and noise; better still, the Stealth algorithm has the smallest impact on image visual quality. Meanwhile, we expose a user-adjustable parameter, called cloak thickness, for regulating the perturbation intensity. Furthermore, we find that the processed images have a transferability property; that is, adversarial images generated for one particular DNN also influence others.
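The abstract does not specify the Stealth algorithm's internals, but the general idea it describes, perturbing an image so a detector's confidence drops while the change stays visually small, can be illustrated with a generic signed-gradient (FGSM-style) sketch against a toy detector. Everything below (the linear-plus-sigmoid "detector", the function names, the epsilon value) is a hypothetical stand-in, not the paper's method; epsilon here plays the role of the paper's "cloak thickness" knob.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objectness(x, w, b):
    # Toy stand-in for a DNN detector's objectness score in [0, 1].
    return sigmoid(np.dot(w, x) + b)

def stealth_perturb(x, w, b, epsilon=0.05):
    """Lower the objectness score with one signed-gradient step.

    epsilon acts like the user-adjustable "cloak thickness": a larger
    value suppresses detection more but perturbs the image more.
    """
    s = objectness(x, w, b)
    # d(score)/dx = s * (1 - s) * w; step against the gradient.
    grad = s * (1.0 - s) * w
    x_adv = x - epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # stay in the valid pixel range

rng = np.random.default_rng(0)
x = rng.random(64)                   # flattened toy "image patch"
w = rng.standard_normal(64)
b = 0.5

before = objectness(x, w, b)
after = objectness(stealth_perturb(x, w, b), w, b)
print(after < before)                # detector confidence decreased
```

Against a real DNN detector, the gradient would come from backpropagation through the network rather than this closed form, and the transferability property mentioned in the abstract means a perturbation computed for one network often lowers other networks' scores too.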
Funder
National Natural Science Foundation of China
Subject
Computer Networks and Communications, Information Systems
Cited by
24 articles.
1. Facial Soft-biometrics Obfuscation through Adversarial Attacks;ACM Transactions on Multimedia Computing, Communications, and Applications;2024-09-12
2. Activity Recognition Protection for IoT Trigger-Action Platforms;2024 IEEE 9th European Symposium on Security and Privacy (EuroS&P);2024-07-08
3. Privacy Protection for Image Sharing Using Reversible Adversarial Examples;ICC 2024 - IEEE International Conference on Communications;2024-06-09
4. AdvRevGAN: On Reversible Universal Adversarial Attacks for Privacy Protection Applications;2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP);2023-09-17
5. A novel abstraction for security configuration in virtual networks;Computer Networks;2023-06