Abstract
The article addresses the problem of determining the methodological and conceptual foundations of the ethics of artificial intelligence. It is shown that the principle-based approach rests on the theory of value embedding, which assumes that technical objects can either be carriers of values themselves or at least contribute to the realization of certain values. At the same time, this approach is highly dependent on stakeholders and tends to declare ethics rather than ensure it. The person-centered approach is based on the idea of personal moral responsibility; its main problems are the responsibility gap and the unpredictability of the actions of artificial intelligence. A critical approach is proposed, according to which the subject matter of artificial intelligence ethics is the impact of the technology on people's ideas and values, their behavior, and their decision-making. The article introduces and discusses the concept of the scale paradox arising from the use of artificial intelligence: many ethically acceptable instances of using the technology can, taken together, lead to ethically unacceptable consequences. It is shown that one way of applying the critical approach is to study the attitudes and stereotypes associated with artificial intelligence in the mass consciousness.
Publisher
Samara National Research University