Abstract
The security issues of machine learning have attracted much attention, and model extraction attacks are among them. In a model extraction attack, an adversary collects data through query access to a victim model and uses it to train a substitute model, thereby stealing the functionality of the target model. Most related work to date has focused on model extraction attacks against discriminative models, whereas this paper focuses on deep generative models. First, according to the adversary's goals, the attacks are taxonomized into two types: accuracy extraction attacks and fidelity extraction attacks, with effectiveness evaluated by 1-NN accuracy. Attacks across three main types of deep generative models and the influence of the number of queries are also studied. Finally, this paper examines defensive techniques for safeguarding the models according to their architectures.
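The 1-NN accuracy mentioned in the abstract is commonly computed as a leave-one-out two-sample test: pool the victim's and the substitute's samples, classify each point by its nearest neighbor among the rest, and check how often the predicted source is correct. An accuracy near 0.5 means the two sample sets are statistically hard to distinguish (a successful extraction), while an accuracy near 1.0 means they are clearly different. A minimal sketch of this metric, with synthetic Gaussian data standing in for model samples (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def one_nn_accuracy(real, fake):
    """Leave-one-out 1-NN two-sample test.

    Pools the two sample sets, labels them 0/1, and classifies each
    point by its nearest neighbor among all *other* points.
    Accuracy ~0.5 -> the sets are indistinguishable; ~1.0 -> distinct.
    """
    X = np.vstack([real, fake])
    y = np.concatenate([np.zeros(len(real)), np.ones(len(fake))])
    # Pairwise squared Euclidean distances between all pooled points.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point's match with itself
    nn = d.argmin(axis=1)        # index of each point's nearest neighbor
    return float((y[nn] == y).mean())

# Illustrative data: two draws from the same distribution should score
# near 0.5; a clearly shifted distribution should score near 1.0.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 2))
fake = rng.normal(0.0, 1.0, size=(200, 2))  # mimics a good extraction
far = rng.normal(5.0, 1.0, size=(200, 2))   # mimics a poor extraction
print(one_nn_accuracy(real, fake))  # close to 0.5
print(one_nn_accuracy(real, far))   # close to 1.0
```

In an extraction setting, `real` would hold samples from the victim generative model and `fake` samples from the trained substitute.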
Subject
General Physics and Astronomy
Cited by
3 articles.
1. GNMS: A novel method for model stealing based on GAN;2023 Eleventh International Conference on Advanced Cloud and Big Data (CBD);2023-12-18
2. Protection of Computational Machine Learning Models against Extraction Threat;Automatic Control and Computer Sciences;2023-12
3. A Taxonomic Survey of Model Extraction Attacks;2023 IEEE International Conference on Cyber Security and Resilience (CSR);2023-07-31