Abstract
The control problem related to robots and AI usually discussed is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g. control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good and a virtue is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots that we exercise complete control over. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.
Funder
Nederlandse Organisatie voor Wetenschappelijk Onderzoek
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
References (53 articles)
1. Yudkowsky, E.: Artificial intelligence as a positive and negative factor in global risk. In: Bostrom, N., Ćirković, M.M. (eds.) Global Catastrophic Risks, pp. 308–345. Oxford University Press, New York (2008)
2. Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014)
3. Bryson, J.J.: Robots should be slaves. In: Wilks, Y. (ed.) Close Engagements with Artificial Companions, pp. 63–74. John Benjamins, London (2010)
4. Santoni de Sio, F., van den Hoven, J.: Meaningful human control over autonomous systems: a philosophical account. Front. Robot. AI 5, 15 (2018). https://doi.org/10.3389/frobt.2018.00015
5. Nyholm, S.: Attributing agency to automated systems: reflections on human–robot collaboration and responsibility-loci. Sci. Eng. Ethics 24(4), 1201–1219 (2018)
Cited by 12 articles.