Affiliation:
1. School of Computing and Engineering, University of Huddersfield, United Kingdom
2. School of Computer Science, University of Lincoln, United Kingdom
Abstract
Artificial Intelligence (AI) is increasingly being deployed in practical applications. However, a major concern is whether AI systems will be trusted by humans. To establish trust in an AI system, users need to understand the reasoning behind its solutions; the system should therefore be able to explain and justify its output. Explainable AI Planning is a field concerned with explaining the outputs of AI planning systems, i.e., the solution plans they produce, to a user. The main goal of a plan explanation is to help humans understand the reasoning behind the plans produced by the planners. In this article, we propose an argument scheme-based approach for providing explanations in the domain of AI planning. We present novel argument schemes for creating arguments that explain a plan and its key elements, together with a set of critical questions that allow interaction between the arguments and enable the user to obtain further information about those key elements. Furthermore, we present a novel dialogue system that uses the argument schemes and critical questions to provide interactive dialectical explanations.
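To make the idea of argument schemes and critical questions for plan explanation concrete, the following is a minimal illustrative sketch, not the authors' implementation: all names (PlanStep, ArgumentScheme, critical_questions) and the example plan are assumptions introduced purely for illustration.

```python
# Minimal sketch: an argument scheme that explains why a plan step is in a plan,
# paired with critical questions a user can pose in an explanation dialogue.
# All class and field names here are hypothetical, not taken from the article.
from dataclasses import dataclass, field


@dataclass
class PlanStep:
    """A single action in a solution plan produced by a planner."""
    action: str
    preconditions: list[str]
    effects: list[str]


@dataclass
class ArgumentScheme:
    """An argument explaining why a plan step belongs to the plan."""
    step: PlanStep
    goal: str
    # Critical questions the user may ask to challenge or probe this argument.
    critical_questions: list[str] = field(default_factory=list)

    def instantiate(self) -> str:
        # Fill the scheme with the concrete step and goal to form an explanation.
        return (f"Action '{self.step.action}' is in the plan because its effects "
                f"{self.step.effects} contribute to achieving the goal '{self.goal}'.")


# Usage: explain one step of a simple (hypothetical) logistics plan.
step = PlanStep(action="load(package1, truck1)",
                preconditions=["at(package1, depot)", "at(truck1, depot)"],
                effects=["in(package1, truck1)"])
scheme = ArgumentScheme(step=step,
                        goal="delivered(package1, cityA)",
                        critical_questions=[
                            "Are the preconditions of the action satisfied in the current state?",
                            "Is there an alternative action that achieves the same effect?",
                            "Does the effect actually contribute to the stated goal?",
                        ])
print(scheme.instantiate())
for cq in scheme.critical_questions:
    print("CQ:", cq)
```

In a dialogue system of the kind described in the abstract, each critical question would open a follow-up move in which the user requests further information about the corresponding element of the plan; the structure above only illustrates that pairing.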
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Theoretical Computer Science