Affiliation:
1. Università Carlo Cattaneo – LIUC
Abstract
Objective: the emergence of digital technologies such as artificial intelligence has become a challenge for states across the world. It has brought many risks of human rights violations, including the right to privacy and the dignity of the person, which makes research in this area highly relevant. This article therefore aims to analyse the role played by algorithms in cases of discrimination, focusing on how algorithms may implement biased decisions using personal data. This analysis helps assess how the proposed Artificial Intelligence Act can regulate the matter to prevent the discriminatory effects of using algorithms.
Methods: the methods used were empirical and comparative analysis. Comparative analysis made it possible to compare existing regulation with the provisions of the proposed Artificial Intelligence Act; empirical analysis made it possible to examine existing cases that demonstrate algorithmic discrimination.
Results: the study's results show that the Artificial Intelligence Act needs to be revised, because it remains at a definitional level and is not sufficiently empirical. The author offers ideas on how to improve it and make it more empirical.
Scientific novelty: the innovation of this contribution lies in its multidisciplinary study of discrimination, data protection and the impact on empirical reality in the sphere of algorithmic discrimination and privacy protection.
Practical significance: the article's beneficial impact is to highlight the fact that algorithms obey instructions given on the basis of the data that feeds them. Lacking abductive capabilities, algorithms merely act as obedient executors of orders. The results of the research can be used as a basis for further research in this area, as well as in the law-making process.
Publisher
Kazan Innovative University named after V. G. Timiryasov
Cited by: 2 articles.