Author:
CH.E.N. Sai Priya, Manas Kumar Yogi
Abstract
Artificial Intelligence (AI) has witnessed significant advancements in recent years, enabling its widespread adoption across various domains. However, this progress has also given rise to new challenges, particularly in the context of adversarial machine learning. Adversarial attacks exploit vulnerabilities in AI models, causing them to misclassify inputs or otherwise misbehave. Addressing this critical issue requires trustworthy AI systems that can withstand such adversarial threats. This paper presents a comprehensive study covering the types of adversarial machine learning attacks, the methods adversaries employ to launch them, effective defence mechanisms, and potential future directions in the field. It begins by exploring the various types of adversarial ML attacks, the characteristics and potential consequences of each attack type, and the risks they pose to privacy, security, and fairness in AI systems, and then examines the methods adversaries use to mount these attacks. By understanding adversaries' tactics, researchers and practitioners can develop robust defence mechanisms that withstand them. Building on this understanding, a range of defence strategies can be devised, and emerging research areas such as secure multi-party computation, differential privacy, and federated learning can be integrated to enhance the resilience of AI models. By understanding the nature of adversarial attacks and implementing effective defence strategies, AI systems can be fortified against malicious manipulation. The findings of this study contribute to the development of trustworthy AI systems, ensuring their resilience, transparency, and fairness.
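To make the attack surface concrete, the fast gradient sign method (FGSM) is a canonical evasion attack of the kind the abstract describes: it perturbs an input along the sign of the loss gradient so that a trained classifier misclassifies it. The following is a minimal illustrative sketch, not code from the paper, assuming a PyTorch classifier named model and a labelled batch (x, y) with inputs scaled to [0, 1]:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x along the sign of the loss gradient (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

Adversarial training, one of the defence strategies the paper surveys, mixes such perturbed examples back into the training loop so the model learns to classify them correctly.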
Publisher
Inventive Research Organization
Subject
Computer Networks and Communications, Hardware and Architecture, Software