Abstract
This paper tackles the open problem of value alignment in multi-agent systems. In particular, we propose an approach to building an ethical environment that guarantees that agents in the system learn a joint ethically-aligned behaviour while pursuing their respective individual objectives. Our contributions are founded in the framework of Multi-Objective Multi-Agent Reinforcement Learning. Firstly, we characterise a family of Multi-Objective Markov Games (MOMGs), the so-called ethical MOMGs, for which we can formally guarantee the learning of ethical behaviours. Secondly, based on our characterisation, we specify the process for building single-objective ethical environments that simplify learning in the multi-agent system. We illustrate our process with an ethical variation of the Gathering Game, where agents manage to compensate for social inequalities by learning to behave in alignment with the moral value of beneficence.
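The core construction the abstract describes, turning each agent's multi-objective reward in an MOMG into a single-objective reward, can be illustrated with a short sketch. The following minimal Python sketch assumes a linear scalarisation of a two-component reward (individual, ethical) with an ethical weight above a game-dependent threshold; the function names, the weight value, and the reward layout are illustrative assumptions, not the paper's actual formulation or API.

```python
# Illustrative sketch (assumed, not the paper's API): embedding one step of a
# two-objective reward signal into a single-objective reward per agent.
from typing import Dict, Tuple


def scalarise_reward(individual: float, ethical: float, ethical_weight: float) -> float:
    """Linear scalarisation R_i = R_i^0 + w_E * R_i^E of a two-objective reward.

    The assumption here is that if `ethical_weight` exceeds some game-dependent
    threshold, ethically-aligned joint behaviour stays optimal for every agent
    while each still pursues its individual objective.
    """
    return individual + ethical_weight * ethical


def embed_rewards(
    momg_rewards: Dict[str, Tuple[float, float]],
    ethical_weight: float = 2.0,  # assumed to lie above the threshold
) -> Dict[str, float]:
    """Map one step of per-agent (individual, ethical) rewards to scalars."""
    return {
        agent: scalarise_reward(individual, ethical, ethical_weight)
        for agent, (individual, ethical) in momg_rewards.items()
    }


# Hypothetical example in a Gathering-style game: agent "a1" gathered an apple
# (individual reward), while agent "a2" performed a beneficent act (ethical reward).
if __name__ == "__main__":
    step_rewards = {"a1": (1.0, 0.0), "a2": (0.0, 1.0)}
    print(embed_rewards(step_rewards))  # {'a1': 1.0, 'a2': 2.0}
```

Under this reading, agents can then be trained with any standard single-objective multi-agent RL algorithm on the embedded environment, which is what makes the learning problem simpler.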
Funder
HORIZON EUROPE Framework Programme
Horizon 2020 Framework Programme
Fundación para la Formación e Investigación Sanitarias de la Región de Murcia
Ministerio de Asuntos Económicos y Transformación Digital, Gobierno de España
Ministerio de Ciencia, Innovación y Universidades
Consejo Superior de Investigaciones Científicas
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Software