Abstract
Goal recognisers attempt to infer an agent’s intentions from a sequence of observed actions. This is an important component of intelligent systems that aim to assist or thwart actors; however, many challenges must be overcome. For example, the initial state of the environment may be partially unknown, agents can act suboptimally, and observations can be missing. Approaches that adapt classical planning techniques to goal recognition have previously been proposed, but they generally assume that the initial world state is accurately defined. In this paper, a state is inaccurate if any fluent’s value is unknown or incorrect. Our aim is to develop a goal recognition approach that is as accurate as the current state-of-the-art algorithms and whose accuracy does not deteriorate when the initial state is inaccurately defined. To cope with this complication, we propose solving goal recognition problems by means of an Action Graph. An Action Graph models the dependencies, i.e. ordering constraints, between all actions rather than only the actions within a plan. Leaf nodes correspond to actions and are connected to their dependencies via operator nodes. After generating an Action Graph, the graph’s nodes are labelled with their distance from each hypothesis goal. This distance is based on the number and type of nodes traversed to reach the node in question from an action node whose execution achieves the goal state. For each observation, the goal probabilities are then updated based on either the distance of the observed action’s node from each goal or the change in that distance. Our experimental results, across 15 different domains, demonstrate that our approach is robust to inaccuracies in the defined initial state.
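The distance-labelling and probability-update idea from the abstract can be illustrated with a minimal sketch. All names, the toy graph, and the Boltzmann-style update rule below are assumptions for illustration only, not the paper’s actual implementation (in particular, the paper also weights distances by node type, which this sketch omits):

```python
from collections import deque
import math

def label_distances(graph, goal_actions):
    """BFS over the action/operator graph, starting from the action nodes
    that achieve a hypothesis goal; distance = number of nodes traversed."""
    dist = {a: 0 for a in goal_actions}
    queue = deque(goal_actions)
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def update_goal_probabilities(distances_per_goal, observed_action, beta=1.0):
    """Illustrative update: a shorter distance from the observed action's
    node to a goal yields a proportionally higher goal probability."""
    weights = {
        goal: math.exp(-beta * dist.get(observed_action, float("inf")))
        for goal, dist in distances_per_goal.items()
    }
    total = sum(weights.values())
    return {goal: w / total for goal, w in weights.items()}

# Toy graph: action nodes (pick, move, drop) linked via operator nodes.
graph = {
    "pick": ["op1"], "op1": ["pick", "move"],
    "move": ["op1", "op2"], "op2": ["move", "drop"],
    "drop": ["op2"],
}
distances = {
    "deliver": label_distances(graph, ["drop"]),  # drop achieves "deliver"
    "fetch": label_distances(graph, ["pick"]),    # pick achieves "fetch"
}
probs = update_goal_probabilities(distances, "drop")
```

Observing `drop` here shifts probability mass towards the hypothetical "deliver" goal, since its achieving action is at distance 0 while "fetch" is four traversals away.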
Funder: Fonds Wetenschappelijk Onderzoek
Publisher: Springer Science and Business Media LLC
Subjects: Artificial Intelligence, Hardware and Architecture, Human-Computer Interaction, Information Systems, Software