Abstract
Cloud service providers, including Google, Amazon, and Alibaba, have now launched machine-learning-as-a-service (MLaaS) platforms, allowing clients to access sophisticated cloud-based machine learning models via APIs. Unfortunately, the commercial value of these models makes them alluring targets for theft, and their strategic position in the IT infrastructure of many companies makes them an enticing springboard for further adversarial attacks. In this paper, we put forth a novel and effective attack strategy, dubbed InverseNet, that steals the functionality of black-box cloud-based models with only a small number of queries. The crux of the innovation is that, unlike existing model extraction attacks that rely on public datasets or adversarial samples, InverseNet constructs inversed training samples to increase the similarity between the extracted substitute model and the victim model. Further, only a small number of data samples with high confidence scores (rather than an entire dataset) are used to reconstruct the inversed dataset, which substantially reduces the attack cost. Extensive experiments conducted on three simulated victim models and Alibaba Cloud's commercially available API demonstrate that InverseNet yields a model with significantly greater functional similarity to the victim model than current state-of-the-art attacks, at a substantially lower query budget.
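The query-efficiency mechanism the abstract describes can be illustrated with a minimal sketch: the attacker queries the black-box victim, keeps only the responses returned with high confidence, and uses those few (sample, label) pairs to seed the inversed training set. The function and API names below are illustrative assumptions; the paper's actual inversion model is not reproduced here.

```python
# Hypothetical sketch of InverseNet's query-selection stage (illustrative
# names only -- `victim_api` stands in for a real MLaaS endpoint).
import random


def victim_api(sample):
    """Stand-in for a black-box MLaaS endpoint: returns (label, confidence)."""
    random.seed(sample)  # deterministic toy behaviour for the demo
    conf = random.random()
    return ("cat" if conf > 0.5 else "dog", conf)


def select_high_confidence(samples, threshold=0.9):
    """Query the victim and keep only samples labeled with high confidence.

    These few confident (sample, label) pairs would seed the inversed
    dataset, which is what keeps the overall query budget small.
    """
    kept = []
    for s in samples:
        label, conf = victim_api(s)
        if conf >= threshold:
            kept.append((s, label, conf))
    return kept


if __name__ == "__main__":
    seeds = select_high_confidence(range(1000), threshold=0.95)
    print(f"kept {len(seeds)} of 1000 queries")
```

In the actual attack, the kept samples would then be fed to an inversion model that reconstructs representative training inputs, and the substitute model is trained on that reconstructed dataset rather than on the full query stream.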
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
18 articles.
1. MEGEX: Data-Free Model Extraction Attack Against Gradient-Based Explainable AI;Proceedings of the 2nd ACM Workshop on Secure and Trustworthy Deep Learning Systems;2024-07-02
2. Revisiting Black-box Ownership Verification for Graph Neural Networks;2024 IEEE Symposium on Security and Privacy (SP);2024-05-19
3. SoK: Pitfalls in Evaluating Black-Box Attacks;2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML);2024-04-09
4. Model Extraction Attack against On-device Deep Learning with Power Side Channel;2024 25th International Symposium on Quality Electronic Design (ISQED);2024-04-03
5. PADVG: A Simple Baseline of Active Protection for Audio-Driven Video Generation;ACM Transactions on Multimedia Computing, Communications, and Applications;2024-03-08