Abstract
With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly, and potentially adversely, affect human well-being and demand of the agent a degree of normative sensitivity and compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules. This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context, premised on a range of underlying relevant normative principles. To translate and reduce normative principles into SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant.
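As one way to picture what such explicitly formulated practical rules might look like in an implementation, the Python sketch below models a rule as an event-triggered default response together with ordered "unless" defeaters that override it in more specific circumstances. The Rule class, the resolve function, and the assistive-care event names are illustrative assumptions introduced here, not the paper's actual notation.

from dataclasses import dataclass, field

@dataclass
class Rule:
    """A hypothetical SLEEC-style rule: when `trigger` occurs, the agent
    should perform `response`, unless a defeater condition holds, in which
    case the paired alternative response applies instead."""
    trigger: str
    response: str
    defeaters: list[tuple[str, str]] = field(default_factory=list)  # (condition, alternative)

def resolve(rule: Rule, context: set[str]) -> str:
    """Return the action the rule prescribes in the given context.
    The last defeater whose condition holds wins, mirroring chains of
    increasingly specific 'unless' refinements."""
    action = rule.response
    for condition, alternative in rule.defeaters:
        if condition in context:
            action = alternative
    return action

# Illustrative rule for an assistive-care scenario (our example, not the paper's):
r1 = Rule(
    trigger="UserFallen",
    response="CallSupport",
    defeaters=[("UserDeclinesHelp", "ConfirmWithUser"),
               ("UserUnresponsive", "CallEmergencyServices")],
)

print(resolve(r1, {"UserFallen"}))                      # CallSupport
print(resolve(r1, {"UserFallen", "UserDeclinesHelp"}))  # ConfirmWithUser
print(resolve(r1, {"UserFallen", "UserUnresponsive"}))  # CallEmergencyServices

Representing defeaters as an ordered list loosely mirrors the iterative refinement the abstract describes: each pass over the rule set can append a more specific exception without rewriting the default obligation.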
Funder
UK Research and Innovation
Royal Academy of Engineering
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Philosophy
Cited by
15 articles.
1. Logical Formalisms for Ethics. Proceedings of the 2024 International Conference on Information Technology for Social Good, 2024-09-04.
2. Toolkit for specification, validation and verification of social, legal, ethical, empathetic and cultural requirements for autonomous agents. Science of Computer Programming, 2024-09.
3. The perfect technological storm: artificial intelligence and moral complacency. Ethics and Information Technology, 2024-08-03.
4. Normative Requirements Operationalization with Large Language Models. 2024 IEEE 32nd International Requirements Engineering Conference (RE), 2024-06-24.
5. Human empowerment in self-adaptive socio-technical systems. Proceedings of the 19th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 2024-04-15.