Abstract
Autonomous systems are machines that can alter their behavior without direct human oversight or control. How ought we to program them to behave? A plausible starting point is the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever a human agent ought to do in the same circumstances. Although the Reduction to Acts Thesis is initially appealing, we argue that it is false: it is sometimes permissible to program a machine to do something that it would be wrong for a human to do. We advance two main arguments for this claim. First, because the way an autonomous system will behave can be known in advance, this knowledge can indirectly affect the behavior of other agents in ways that may not be possible at the time the system actually executes its programming. Second, a lack of knowledge of the identities of the victims and beneficiaries can provide a justification at the programming stage that would be unavailable to an agent at the time the autonomous system executes its programming.
Publisher: Springer Science and Business Media LLC