Abstract
Background: Digital technologies are a major force currently driving society's development, affecting not only traditional spheres such as medicine, manufacturing, and education, but also legal relations, including criminal proceedings. This concerns more than technologies related to videoconferencing, automated case distribution, digital evidence, and the like. Development is moving forward constantly and rapidly, and we now face issues related to the use of artificial intelligence technologies in criminal proceedings. Such changes also entail new threats and challenges, namely the challenge of respecting fundamental human rights and freedoms amid technological development. In addition, there is the matter of ensuring the implementation of basic legal principles, such as the presumption of innocence, non-discrimination, and the protection of the right to privacy. This concern arises when artificial intelligence systems are applied in the criminal justice system.
Methods: The general philosophical framework of this research consisted of axiological and hermeneutic approaches, which allowed us to conduct a value analysis of fundamental human rights and of changes in their perception in the context of AI application, as well as to undertake an in-depth study and interpretation of legal texts. In constructing the system of basic principles for using AI systems in criminal justice, we employed system-structural and logical methods of research. The study also relied on the comparative law method, comparing legal regulation and law enforcement practice across different legal systems. The method of legal modelling was applied to outline the main areas of possible application of AI systems in criminal proceedings.
Results and Conclusions: The article identifies the main possible vectors of the use of artificial intelligence systems in criminal proceedings and assesses the feasibility and prospects of their implementation. It further concludes that only the use of AI systems for auxiliary purposes carries minimal risk of interference with human rights and freedoms. By contrast, other areas of AI adoption may significantly infringe rights and freedoms; the use of AI for such purposes should therefore be fully controlled, verified, and only subsidiary, and in certain cases prohibited altogether.
Publisher
East-European Law Research Center