Authors:
Parfenov Denis, Grishina Lubov, Zhigalov Artur, Parfenov Anton
Abstract
Machine learning solutions have been successfully applied in many domains, so it is now important to ensure the security of the machine learning models themselves and to develop appropriate protective approaches. In this study, we focus on adversarial attacks, whose attack vector is aimed at distorting the outputs of machine learning models. We selected the IoTID20 and CIC-IoT-2023 datasets, which are used to detect anomalous activity in IoT networks, and examined the effectiveness of adversarial attacks based on data leakage against ML models deployed in cloud services. The results of the study highlight the importance of continually updating and developing methods for detecting and preventing cyberattacks in the field of machine learning, and the experiments demonstrate the impact of adversarial attacks on services in IoT networks.
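The abstract does not specify which adversarial technique was used; as a generic, hedged illustration of the kind of evasion attack discussed, the sketch below applies the fast gradient sign method (FGSM) to a small placeholder traffic classifier. The feature width, model architecture, and epsilon are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of an evasion-style adversarial attack (FGSM) against a
# network-traffic classifier. All names and parameters here are hypothetical
# placeholders; the paper's actual models and attack setup may differ.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_FEATURES = 20  # assumed width of a normalised flow-feature vector

# Stand-in classifier: benign (0) vs. anomalous (1) traffic. Untrained,
# used only to demonstrate the mechanics of the perturbation.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
model.eval()

def fgsm_attack(x, label, epsilon=0.05):
    """Perturb x one signed-gradient step in the loss-increasing direction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Apply the perturbation and clamp back to the valid feature range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# A synthetic "anomalous" flow record and its true label.
x = torch.rand(1, N_FEATURES)
y = torch.tensor([1])

print("prediction before attack:", model(x).argmax(dim=1).item())
x_adv = fgsm_attack(x, y)
print("prediction after attack: ", model(x_adv).argmax(dim=1).item())
```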
Cited by
2 articles.