Abstract
Increasingly, applications with AI elements are used not only in the technical field, to improve the efficiency of services provided by the private and public sectors, but also to make decisions that directly affect the lives of citizens. However, like any technological solution, AI has both positive and negative effects, which social scientists are only beginning to understand.
The purpose of the article is to identify the political and legal consequences of AI application and to analyze the legal mechanisms for ensuring its safe use based on the experience of foreign countries.
As AI systems prove increasingly useful in the real world, their scope of application expands, which raises the risk of abuse. The consequences of losing effective control over them are of growing concern. Automated decision-making can produce distorted results that repeat and reinforce existing biases. An aura of neutrality and impartiality surrounds AI decision-making, so these systems are accepted as objective, even though their outputs may reflect biased historical decisions or even outright discrimination. Without transparency about the data or the AI algorithms that interpret it, the public may be left in the dark about how decisions with a significant impact on their lives are made.
Awareness of the dangers of uncontrolled AI use has led a number of countries to seek legal instruments to minimize the negative consequences of its use. The European Union is the closest to introducing basic standards for AI regulation. A draft Artificial Intelligence Act, published in 2021, classifies the risks of using AI into four categories: unacceptable, high-risk, limited, and minimal. Once adopted, the AI Act will be the first horizontal legislative act in the EU to regulate AI systems, introducing rules for the safe and secure placement of AI-enabled products on the EU market. Incorporating the European experience, adapted to Ukrainian specifics, into domestic legislation on the use of digital technologies should both facilitate adaptation to the European legal space and promote the development of the country's technology sector.
Key words: artificial intelligence, algorithms, discrimination, disinformation, democracy.
Publisher
Koretsky Institute of State and Law of National Academy of Sciences of Ukraine