Abstract
We consider task planning for long-living intelligent agents situated in dynamic environments. Specifically, we address the problem of incomplete knowledge of the world due to the addition of new objects with unknown action models. We propose a multilayered agent architecture that uses meta-reasoning to control hierarchical task planning and situated learning, monitor expectations generated by a plan against world observations, form goals and rewards for the situated reinforcement learner, and learn the missing planning knowledge relevant to the new objects. We use occupancy grids as a low-level representation for the high-level expectations to capture changes in the physical world due to the added objects, and provide a similarity method for detecting discrepancies between the expectations and the observations at run time; the meta-reasoner uses these discrepancies to formulate goals and rewards for the learner, and the learned policies are added to the hierarchical task network plan library for future reuse. We describe experiments in the Minecraft and Gazebo microworlds that demonstrate the efficacy of the architecture and of the learning technique. We test our approach against an ablated reinforcement learning (RL) version, and our results indicate that this form of expectation improves the learning curve for RL while being more generic than propositional representations.
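The abstract names the key mechanism (grid-based expectations, a similarity test for discrepancies, and discrepancy-derived rewards) without specifying its implementation. The sketch below illustrates one plausible reading, assuming binary NumPy occupancy grids and an intersection-over-union similarity; the function names (`grid_similarity`, `has_discrepancy`, `shaped_reward`) and the discrepancy threshold are hypothetical, not the paper's API.

```python
import numpy as np

# Illustrative sketch only: compare an expected occupancy grid against an
# observed one, flag a discrepancy, and turn the mismatch into a reward
# signal for the situated learner. The paper's exact similarity measure is
# not given in the abstract; IoU over occupied cells is one plausible choice.

def grid_similarity(expected: np.ndarray, observed: np.ndarray) -> float:
    """Similarity in [0, 1] between two binary occupancy grids (IoU)."""
    intersection = np.logical_and(expected, observed).sum()
    union = np.logical_or(expected, observed).sum()
    return 1.0 if union == 0 else float(intersection) / float(union)

def has_discrepancy(expected: np.ndarray, observed: np.ndarray,
                    threshold: float = 0.9) -> bool:
    """Flag a discrepancy when the observation diverges from the expectation.

    The threshold is an assumed tuning parameter, not from the paper."""
    return grid_similarity(expected, observed) < threshold

def shaped_reward(expected: np.ndarray, observed: np.ndarray) -> float:
    """Reward the learner for moving the world toward the expected state."""
    return grid_similarity(expected, observed)

if __name__ == "__main__":
    expected = np.zeros((4, 4), dtype=bool)
    expected[1:3, 1:3] = True          # the plan expects a 2x2 occupied block
    observed = expected.copy()
    observed[3, 3] = True              # a new, unmodeled object appears
    print(grid_similarity(expected, observed))  # 4/5 = 0.8
    print(has_discrepancy(expected, observed))  # True -> formulate a goal
```

On detection, the meta-reasoner would hand the learner a goal of resolving the mismatch, with `shaped_reward` supplying dense feedback during learning; once learned, the resulting policy would be stored in the HTN plan library for reuse.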
Publisher
Cambridge University Press (CUP)
Subject
Artificial Intelligence, Software
References
50 articles.
Cited by
8 articles.