Affiliation:
1. College of Cyber Security, Jinan University, Guangzhou, China
2. National Joint Engineering Research Center of Network Security Detection and Protection Technology, Jinan University, Guangzhou, China
3. College of Information Science and Technology, Donghua University, Shanghai, China
Abstract
With the development of deep learning and federated learning (FL), federated intrusion detection systems (IDSs) based on deep learning have played a significant role in securing industrial control systems (ICSs). However, adversarial attacks on ICSs may compromise the ability of deep learning-based IDSs to accurately detect cyberattacks, leading to serious consequences. Moreover, during the generation of adversarial samples, there is no effective method for selecting the replacement model, so the vulnerabilities of the target models may not be fully exposed. The authors first propose an automated FL-based method for generating adversarial samples in ICSs, called AFL-GAS, which exploits the principle of transfer attacks and fully accounts for the importance of the replacement model during adversarial sample generation. In the proposed AFL-GAS method, a lightweight neural architecture search method is developed to find an optimised replacement model composed of a combination of four lightweight basic blocks. Then, to enhance adversarial robustness, the authors propose a multi-objective neural architecture search-based IDS method against adversarial attacks in ICSs, called MoNAS-IDSAA, which considers classification performance on regular samples and adversarial robustness simultaneously. Experimental results on three widely used intrusion detection datasets in ICSs, namely Secure Water Treatment (SWaT), Water Distribution, and Power System Attack, demonstrate that the proposed AFL-GAS method has clear advantages in evasion rate and lightweight design compared with four other methods. In addition, the proposed MoNAS-IDSAA method not only achieves better classification performance but also offers clear advantages in adversarial robustness compared with a manually designed federated adversarial learning-based IDS method.
Funder
Natural Science Foundation of Guangdong Province
National Natural Science Foundation of China
Publisher
Institution of Engineering and Technology (IET)