Affiliation:
1. School of Computer Science, University of Bristol, Bristol, United Kingdom
Abstract
Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens that incorporates social context. Normative ethical principles drawn from philosophy can be used to reason methodically about ethics and to make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey the AI and computer science literature and develop a taxonomy of 21 normative ethical principles that can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes of which AI practitioners seeking to implement ethical principles should be aware. We envision that this taxonomy will facilitate the development of methodologies for incorporating normative ethical principles into the reasoning capacities of responsible AI systems.
Funder
EPSRC Doctoral Training Partnership
UKRI EPSRC
Publisher
Association for Computing Machinery (ACM)