Participation, prediction, and publicity: avoiding the pitfalls of applying Rawlsian ethics to AI
-
Published: 2023-09-05
-
ISSN: 2730-5953
-
Container-title: AI and Ethics
-
Language: en
-
Short-container-title: AI Ethics
Abstract
Given the popularity of John Rawls’ theory of justice as fairness as an ethical framework in the artificial intelligence (AI) field, this article examines how the theory fits with three different conceptual applications of AI technology. First, the article discusses a proposition by Ashrafian to let an AI agent perform the deliberation that produces a Rawlsian social contract governing humans. The discussion demonstrates that such an application is not viable, as it contradicts foundational aspects of Rawls’ theories. An exploration of more viable applications of Rawlsian theory in the AI context follows, introducing the distinction between intrinsic and extrinsic theoretical adherence, i.e., the difference between approaches that integrate Rawlsian theory into the system design and those that situate AI systems within Rawls-consistent policy and legislative frameworks. The article uses emerging AI legislation in the EU and the U.S., as well as Gabriel’s argument for adopting Rawls’ publicity criterion in the AI field, as examples of extrinsic adherence to Rawlsian theory. A discussion of the epistemological challenges of predictive AI systems then illustrates some implications of intrinsic adherence to Rawlsian theory. While AI systems can make short-term predictions about human behavior with intrinsic adherence to Rawls’ theory of justice as fairness, long-term, large-scale predictions do not adhere to the theory but instead constitute the type of utilitarianism Rawls vehemently opposed. The article concludes with an overview of the implications of these arguments for policymakers and regulators.
Funder
University of Southern California
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
References (43 articles)
1. Jørgensen, A.K., Søgaard, A.: Rawlsian AI fairness loopholes. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00226-9
2. Heidari, H., Ferrari, C., Gummadi, K., Krause, A.: Fairness behind a veil of ignorance: a welfare analysis for automated decision making. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 31, pp. 1265–1276. Curran Associates, Inc., New York (2018)
3. Heidari, H., Loi, M., Gummadi, K.P., Krause, A.: A moral framework for understanding fair ML through economic models of equality of opportunity. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 181–190 (2019)
4. Verdiesen, I., Dignum, V., van den Hoven, J.: Measuring moral acceptability in e-deliberation: a practical application of ethics by participation. ACM Trans. Internet Technol. (TOIT) 18(4), 1–20 (2018)
5. Santoni de Sio, F., Almeida, T., van den Hoven, J.: The future of work: freedom, justice and capital in the age of artificial intelligence. Crit. Rev. Int. Soc. Polit. Philos. (2021). https://doi.org/10.1080/13698230.2021.2008204