Affiliation:
1. Shanghai Jiao Tong University
Abstract
Recently, prompt tuning has shown remarkable performance as a new learning paradigm, which freezes pre-trained language models (PLMs) and only tunes some soft prompts. A fixed PLM only needs to be loaded with different prompts to adapt different downstream tasks. However, the prompts associated with PLMs may be added with some malicious behaviors, such as backdoors. The victim model will be implanted with a backdoor by using the poisoned prompt. In this paper, we propose to obtain the poisoned prompt for PLMs and corresponding downstream tasks by prompt tuning. We name this Poisoned Prompt Tuning method "PPT". The poisoned prompt can lead a shortcut between the specific trigger word and the target label word to be created for the PLM. So the attacker can simply manipulate the prediction of the entire model by just a small prompt. Our experiments on various text classification tasks show that PPT can achieve a 99% attack success rate with almost no accuracy sacrificed on original task. We hope this work can raise the awareness of the possible security threats hidden in the prompt.
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
4 articles.
订阅此论文施引文献
订阅此论文施引文献,注册后可以免费订阅5篇论文的施引文献,订阅后可以查看论文全部施引文献
1. NWS: Natural Textual Backdoor Attacks Via Word Substitution;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
2. Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions;IEEE Access;2024
3. On the Vulnerabilities of Text-to-SQL Models;2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE);2023-10-09
4. FedPrompt: Communication-Efficient and Privacy-Preserving Prompt Tuning in Federated Learning;ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2023-06-04