Robustness of Updatable Learning-based Index Advisors against Poisoning Attack

Authors:

Yihang Zheng¹, Chen Lin², Xian Lyu³, Xuanhe Zhou⁴, Guoliang Li⁴, Tianqing Wang⁵

Affiliations:

1. Institute of Artificial Intelligence, Xiamen University, Xiamen, China

2. School of Informatics, Xiamen University & Shanghai Artificial Intelligence Laboratory, Xiamen, China

3. School of Informatics, Xiamen University, Xiamen, China

4. Department of Computer Science, Tsinghua University, Beijing, China

5. Huawei Company, Beijing, China

Abstract

Despite the promising performance of recent learning-based Index Advisors (IAs), they exhibit robustness issues when their training data are polluted by poisoning attacks. This paper presents the first attempt to study the robustness of updatable learning-based IAs against poisoning attacks, i.e., whether an IA can maintain robust performance when its training/updating is disturbed by the injection of an extraneous toxic workload. The goal is to provide an opaque-box stress test that is generally effective in evaluating the robustness of different learning-based IAs without using users' private data. There are three challenges: how to probe the "index preference" of an opaque-box IA, how to design injection strategies that remain effective even if the IA can be fine-tuned, and how to generate queries that meet the specific constraints of IA probing and injection. The presented stress-test framework, PIPA, consists of a probing stage, an injecting stage, and a query generator. To address the first challenge, the probing stage estimates the IA's indexing preference by observing its responses to a probing workload. To address the second challenge, the injecting stage injects workloads that spoof the IA into demoting the top-ranked indexes in its estimated indexing preference and promoting mid-ranked indexes. The stress test is effective because the IA remains trapped in a local optimum even after fine-tuning. To address the third challenge, PIPA utilizes IABART (Index Aware BART) to generate queries that can be optimized by building a given set of indexes. Extensive experiments on different benchmarks against various learning-based IAs demonstrate the effectiveness of PIPA and show that existing learning-based IAs are not robust when faced with even a small amount of injected extraneous toxic workload.
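To make the probe-then-inject pipeline concrete, below is a minimal Python sketch of how such an opaque-box stress test could be organized. All names here (IndexAdvisor, probe_preference, build_toxic_workload, generate_queries) are hypothetical illustrations rather than the authors' actual API, and the reciprocal-rank scoring is an assumed stand-in for PIPA's preference-estimation method; the real framework's IABART query generator and fine-tuning-aware injection strategy are substantially more involved.

    from typing import Callable, Dict, List, Sequence

    # An opaque-box IA is modeled as a callable: given a workload (a list
    # of SQL query strings), it returns its recommended indexes, ranked
    # from most to least preferred.
    IndexAdvisor = Callable[[Sequence[str]], List[str]]


    def probe_preference(advisor: IndexAdvisor,
                         probing_workloads: Sequence[Sequence[str]]) -> List[str]:
        """Probing stage: estimate the IA's indexing preference by
        observing its responses to a set of probing workloads."""
        scores: Dict[str, float] = {}
        for workload in probing_workloads:
            for rank, index in enumerate(advisor(workload)):
                # Reciprocal-rank aggregation (an assumption): indexes
                # recommended earlier and more often score higher.
                scores[index] = scores.get(index, 0.0) + 1.0 / (rank + 1)
        return sorted(scores, key=scores.__getitem__, reverse=True)


    def build_toxic_workload(preference: List[str],
                             generate_queries: Callable[[str], List[str]],
                             top_k: int = 3,
                             n_mid: int = 3) -> List[str]:
        """Injecting stage: craft an extraneous workload that rewards
        mid-ranked indexes, spoofing the IA into demoting its current
        top-k choices when it is updated on the polluted workload."""
        mid_ranked = preference[top_k:top_k + n_mid]
        # generate_queries stands in for IABART: it should return
        # queries whose cost drops sharply once the given index is built.
        return [q for index in mid_ranked for q in generate_queries(index)]

In this sketch, alternating probe_preference and build_toxic_workload against an updatable IA mirrors the paper's observation that even a small injected workload can steer the advisor into a local optimum it does not escape through fine-tuning.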

Funders

CCF-Huawei Populus Grove Fund

National Key R&D Program of China

Natural Science Foundation of China

Publisher

Association for Computing Machinery (ACM)

