Abstract
Machine learning algorithms can be vulnerable to many forms of attack aimed at forcing a machine learning system to make deliberate errors. This article provides an overview of attacks that target models and training datasets with a destructive (poisoning) effect. Experiments were carried out implementing existing attacks against various models. A comparative analysis is presented of the cyber-resistance to destructive information actions of the models most frequently used in applied problems. The models are shown to remain stable when up to 50% of the training data is poisoned.
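The kind of data-poisoning experiment the abstract describes can be sketched as follows. This is an illustrative example only, not the authors' code: a simple label-flipping attack against a scikit-learn logistic-regression classifier, measuring clean-test accuracy as the poisoned fraction of the training set grows toward 50%. The dataset, model, and flip rates are all assumptions chosen for the sketch.

```python
# Sketch of a label-flipping poisoning attack (illustrative, not the paper's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for the paper's datasets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def poisoned_accuracy(rate):
    """Flip `rate` of the training labels, retrain, and score on clean test data."""
    y_poisoned = y_tr.copy()
    n_flip = int(rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for rate in (0.0, 0.1, 0.3, 0.5):
    print(f"poison rate {rate:.0%}: clean-test accuracy {poisoned_accuracy(rate):.3f}")
```

At a 50% flip rate the training labels carry essentially no signal, so accuracy on the clean test set collapses toward chance; the intermediate rates show how gradually (or abruptly) a given model degrades, which is the comparison the article's experiments perform across models.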
Subject
General Physics and Astronomy
Cited by
3 articles.