Abstract
The ethics of autonomous vehicles (AVs) has received a great deal of attention in recent years, particularly regarding their decision policies in accident situations where human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. This article proposes a strategy for AV decision-making, the Ethical Valence Theory, which frames AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle’s behavior, and the vehicle must mitigate these claims as it makes decisions about its environment. In the context of autonomous driving, the harm an action produces and the uncertainties attached to it are quantified and accounted for during deliberation, resulting in an ethical implementation coherent with reality. The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach flexible enough to accommodate a number of ‘moral positions’ concerning what morality demands and what road users may expect, offering an evaluation tool for the social acceptability of an autonomous vehicle’s ethical decision-making.
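To make the claim-mitigation idea concrete, the following is a minimal illustrative sketch in Python. It assumes hypothetical valence weights, a toy harm-probability model, and a worst-case (min-max) aggregation rule; these are assumptions for illustration only, not the paper's actual Ethical Valence Theory formulation.

```python
# Toy "claim mitigation" decision rule in the spirit of the abstract.
# All weights, probabilities, and the min-max rule are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class RoadUser:
    name: str
    valence: float  # assumed strength of this user's moral claim on the vehicle


@dataclass
class Maneuver:
    name: str
    harm_probability: Dict[str, float]  # estimated risk of harm to each road user


def claim_violation(user: RoadUser, p_harm: float) -> float:
    """Expected claim violation: claim strength weighted by harm risk."""
    return user.valence * p_harm


def choose_maneuver(users: List[RoadUser], options: List[Maneuver]) -> Maneuver:
    """Pick the maneuver whose worst-off road user suffers the smallest expected claim violation."""
    def worst_case(option: Maneuver) -> float:
        return max(
            claim_violation(u, option.harm_probability.get(u.name, 0.0))
            for u in users
        )
    return min(options, key=worst_case)


if __name__ == "__main__":
    users = [RoadUser("pedestrian", valence=1.0), RoadUser("passenger", valence=0.7)]
    options = [
        Maneuver("swerve", {"pedestrian": 0.1, "passenger": 0.4}),
        Maneuver("brake", {"pedestrian": 0.3, "passenger": 0.1}),
    ]
    print(choose_maneuver(users, options).name)  # selects the maneuver minimizing the strongest claim
```

Other aggregation rules (e.g., minimizing total expected valence-weighted harm rather than the worst case) would encode different ‘moral positions’ within the same computational frame.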
Funder
Agence Nationale de la Recherche
Publisher
Springer Science and Business Media LLC
Subject
Management of Technology and Innovation, Health Policy, Issues, Ethics and Legal Aspects, Health (Social Science)
Cited by
37 articles.