Resilience and Resilient Systems of Artificial Intelligence: Taxonomy, Models and Methods
Published: 2023-03-18
Journal (container-title): Algorithms (short title: Algorithms)
Volume: 16, Issue: 3, Page: 165
ISSN: 1999-4893
Language: en
Authors:
Viacheslav Moskalenko 1, Vyacheslav Kharchenko 2, Alona Moskalenko 1, Borys Kuzikov 1

Affiliations:
1. Department of Computer Science, Sumy State University, 2, Mykola Sumtsova St., 40007 Sumy, Ukraine
2. Department of Computer Systems, Networks and Cybersecurity, National Aerospace University “KhAI”, 17, Chkalov Str., 61070 Kharkiv, Ukraine
Abstract
Artificial intelligence systems are increasingly being used in industrial applications, security and military contexts, disaster response complexes, policing and justice practices, finance, and healthcare systems. Disruptions to these systems can therefore have negative impacts on health, mortality, human rights, and asset values, which makes protecting them from various types of destructive influences a relevant area of research. The vast majority of previously published works aim at reducing vulnerability to certain types of disturbances or at implementing certain resilience properties. At the same time, those authors either do not consider the concept of resilience as such, or their understandings of it vary greatly. The aim of this study is to present a systematic approach to analyzing the resilience of artificial intelligence systems, along with an analysis of relevant scientific publications. Our methodology involves forming a set of resilience factors, organizing and defining taxonomic and ontological relationships among the resilience factors of artificial intelligence systems, and analyzing relevant resilience solutions and challenges. This study analyzes the sources of threats and the methods for ensuring each resilience property of artificial intelligence systems. As a result, the potential to create a resilient artificial intelligence system by configuring the architecture and learning scenarios is confirmed. The results can serve as a roadmap for establishing technical requirements for forthcoming artificial intelligence systems, as well as a framework for assessing the resilience of already developed artificial intelligence systems.
Subject
Computational Mathematics, Computational Theory and Mathematics, Numerical Analysis, Theoretical Computer Science
References: 166 articles.
Cited by: 9 articles.