Abstract
This article concerns the role of factual uncertainty in moral decision-making as it pertains to the ethics of machine decision-making (i.e., decisions by AI systems, such as autonomous vehicles, autonomous robots, or decision support systems). The view defended here is that factual uncertainties require a normative evaluation and that the ethics of machine decision-making faces a threefold problem, concerning what a machine ought to do given its technical constraints, what decisional uncertainty is acceptable, and what trade-offs are acceptable to decrease that decisional uncertainty.
Funder
Marianne and Marcus Wallenberg Foundation
Trafikverket
Umeå University
Publisher
Springer Science and Business Media LLC
Subject
General Social Sciences, Philosophy
References (63 articles)
1. AI HLEG (High-Level Expert Group on Artificial Intelligence). (2019). Ethics guidelines for trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation
2. Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6
3. Borenstein, J., Herkert, J. R., & Miller, K. W. (2019). AVs and engineering ethics: The need for a system level analysis. Science & Engineering Ethics, 25(2), 383–398. https://doi.org/10.1007/s11948-017-0006-0
4. Bradley, R., & Drechsler, M. (2014). Types of uncertainty. Erkenntnis, 79(6), 1225–1248. https://doi.org/10.1007/s10670-013-9518-4
5. Brey, P., Lundgren, B., Macnish, K., & Ryan, M. (2019). Guidelines for the development and the use of SIS. Deliverable D3.2 of the SHERPA project. https://doi.org/10.21253/DMU.11316833.v3
Cited by
5 articles