Authors:
Yazdanpanah, Vahid; Gerding, Enrico H.; Stein, Sebastian; Dastani, Mehdi; Jonker, Catholijn M.; Norman, Timothy J.; Ramchurn, Sarvapali D.
Abstract
Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks on that basis. Moreover, we contend that, in certifying the legality of an AI system, formal and computationally implementable notions of responsibility, blame, accountability, and liability are applicable for addressing potential responsibility gaps (i.e. situations in which a group is responsible, but individuals’ responsibility may be unclear). This is a call to enable AI systems themselves, as well as those involved in the design, monitoring, and governance of AI systems, to represent and reason about who can be seen as responsible prospectively (e.g. for completing a task in the future) and who can be seen as responsible retrospectively (e.g. for a failure that has already occurred). To that end, we show that responsibility reasoning should play a key role across all stages of the design, development, and deployment of trustworthy autonomous systems (TAS). This position paper is a first step towards establishing a road map and research agenda on how the notion of responsibility can provide novel solution concepts for ensuring the reliability and legality of TAS and, as a result, enable an effective embedding of AI technologies into society.
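As an illustration of what quantifying degrees of responsibility can mean computationally, the following minimal Python sketch implements a Chockler–Halpern-style degree of responsibility for agents with binary actions and a Boolean group outcome. This is a hedged toy example under our own assumptions (the dictionary encoding, the function name degree_of_responsibility, and the majority-voting scenario are illustrative choices), not the paper's own formalism.

    # Sketch: Chockler-Halpern-style degree of responsibility, assuming each
    # agent takes a binary action and the group outcome is a Boolean function
    # of the joint action. A common simplification for toy settings, not a
    # full structural-equations treatment.
    from itertools import combinations, product

    def degree_of_responsibility(actions, outcome, agent):
        # actions: dict mapping agent name -> 0/1 action actually taken
        # outcome: function from such a dict to 0/1
        # Returns 1/(k+1), where k is the size of the smallest contingency
        # (a change to other agents' actions that leaves the outcome intact)
        # under which flipping `agent`'s action flips the outcome; 0 if none.
        base = outcome(actions)
        others = [a for a in actions if a != agent]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                for alt in product((0, 1), repeat=k):
                    ctx = dict(actions, **dict(zip(subset, alt)))
                    if outcome(ctx) != base:
                        continue  # contingency must preserve the outcome
                    flipped = dict(ctx, **{agent: 1 - ctx[agent]})
                    if outcome(flipped) != base:
                        return 1.0 / (k + 1)
        return 0.0

    # Example: three agents vote; the outcome is carried by majority.
    votes = {"a1": 1, "a2": 1, "a3": 1}
    majority = lambda v: int(sum(v.values()) >= 2)
    for ag in sorted(votes):
        print(ag, degree_of_responsibility(votes, majority, ag))  # each prints 0.5

In this example no single voter is pivotal on their own, yet each receives degree 1/2 (a contingency of size one makes each voter critical), giving a concrete toy instance of how a responsibility gap can be resolved quantitatively rather than left as an all-or-nothing attribution.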
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Human-Computer Interaction, Philosophy
Cited by
11 articles.