European Artificial Intelligence Act: an AI security approach

Authors:

Kalodanis Konstantinos, Rizomiliotis Panagiotis, Anagnostopoulos Dimosthenis

Abstract

Purpose: The purpose of this paper is to highlight the key technical challenges that derive from the recently proposed European Artificial Intelligence Act and, specifically, to investigate the applicability of the requirements that the AI Act mandates for high-risk AI systems from the perspective of AI security.

Design/methodology/approach: This paper presents the main points of the proposed AI Act, with emphasis on the compliance requirements of high-risk systems. It matches known AI security threats with the relevant technical requirements, demonstrates the impact that these security threats can have on the AI Act technical requirements and evaluates the applicability of these requirements based on the effectiveness of the existing security protection measures. Finally, the paper highlights the necessity for an integrated framework for AI system evaluation.

Findings: The technical assessment of the EU AI Act highlights the gap between the proposed requirements and the available AI security countermeasures, as well as the necessity for an AI security evaluation framework.

Keywords: AI Act, high-risk AI systems, security threats, security countermeasures.
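The evaluation of security protection measures is described above only at a high level. As a purely illustrative sketch (not taken from the paper), the Python snippet below shows the kind of evasion-attack robustness check such an evaluation could build on: a fast-gradient-sign-style perturbation applied to a toy logistic model. The model, data and epsilon value are all assumptions made for illustration.

```python
# Illustrative only: minimal adversarial-robustness check for a toy classifier.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model standing in for a trained high-risk AI classifier (assumed).
w = rng.normal(size=10)
b = 0.1

def predict_proba(x):
    """Probability of the positive class under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y_true, eps):
    """FGSM-style perturbation: one signed gradient step on the input."""
    grad = (predict_proba(x) - y_true) * w  # gradient of the logistic loss w.r.t. x
    return x + eps * np.sign(grad)

# Measure how often a small, bounded perturbation flips the model's decision.
X = rng.normal(size=(200, 10))
y = (predict_proba(X) > 0.5).astype(float)  # the model's own clean decisions

flipped = sum(
    (predict_proba(fgsm_perturb(x_i, y_i, eps=0.1)) > 0.5) != bool(y_i)
    for x_i, y_i in zip(X, y)
)
print(f"Decision flipped on {flipped} of {len(X)} inputs at eps = 0.1")
```

A high flip rate under such small perturbations would indicate that the deployed countermeasures fall short of the robustness expectations placed on high-risk systems.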

Publisher

Emerald

Subject

Management of Technology and Innovation, Information Systems and Management, Computer Networks and Communications, Information Systems, Software, Management Information Systems

Cited by 2 articles.

1. Ethical Implications and Principles of Using Artificial Intelligence Models in the Classroom: A Systematic Literature Review, International Journal of Interactive Multimedia and Artificial Intelligence, 2024.

2. Ethical Risks, Concerns, and Practices of Affective Computing: A Thematic Analysis, 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2023-09-10.
