Learning Classical Planning Strategies with Policy Gradient
Published: 2021-05-25
Volume: 29
Pages: 637-645
ISSN: 2334-0843
Container-title: Proceedings of the International Conference on Automated Planning and Scheduling
Short-container-title: ICAPS
Authors: Paweł Gomoluch, Dalal Alrajeh, Alessandra Russo
Abstract
A common paradigm in classical planning is heuristic forward search. Forward search planners often rely on a simple best-first search strategy that remains fixed throughout the search process. In this paper, we introduce a novel search framework capable of alternating between several forward search approaches while solving a particular planning problem. Selection of the approach is performed by a trainable stochastic policy, which maps the state of the search to a probability distribution over the approaches. This enables using policy gradient to learn search strategies tailored to a specific distribution of planning problems and a selected performance metric, e.g., the IPC score. We instantiate the framework by constructing a policy space consisting of five search approaches and a two-dimensional representation of the planner's state. We then train the system on randomly generated problems from five IPC domains using three different performance metrics. Our experimental results show that the learner is able to discover domain-specific search strategies, improving the planner's performance relative to the baselines of plain best-first search and a uniform policy.
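The training scheme the abstract describes, a stochastic policy over a small set of search approaches updated with policy gradient against a planner-performance metric, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: the linear softmax policy, the toy rollout, and the reward are placeholder assumptions standing in for the planner's two-dimensional search-state features, the five search approaches, and a metric such as the IPC score.

```python
# Minimal REINFORCE sketch (illustrative only, not the paper's system):
# a linear softmax policy over N_APPROACHES discrete search approaches,
# conditioned on a 2-D search-state representation, updated with the
# policy-gradient estimator. The rollout and reward are toy placeholders.
import numpy as np

N_APPROACHES, STATE_DIM = 5, 2
rng = np.random.default_rng(0)
W = np.zeros((N_APPROACHES, STATE_DIM + 1))  # +1 column for a bias feature

def features(state):
    return np.append(state, 1.0)

def policy_probs(state):
    """Softmax distribution over search approaches for a given search state."""
    logits = W @ features(state)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def toy_rollout():
    """Placeholder episode: in the real planner each step would run the chosen
    search approach for a fixed budget and observe the new search state."""
    states, actions = [], []
    state = rng.random(STATE_DIM)
    for _ in range(10):
        probs = policy_probs(state)
        a = rng.choice(N_APPROACHES, p=probs)
        states.append(state.copy())
        actions.append(a)
        state = np.clip(state + 0.1 * (a - 2) * rng.random(STATE_DIM), 0.0, 1.0)
    reward = 1.0 - abs(state.mean() - 0.5)  # toy stand-in for, e.g., an IPC score
    return states, actions, reward

def reinforce_update(states, actions, reward, lr=0.1, baseline=0.0):
    """REINFORCE: ascend the reward-weighted sum of log-probability gradients."""
    global W
    advantage = reward - baseline
    for s, a in zip(states, actions):
        probs = policy_probs(s)
        grad_logp = -np.outer(probs, features(s))  # d log pi(a|s) / dW, all rows
        grad_logp[a] += features(s)                # extra term for the chosen action
        W += lr * advantage * grad_logp

baseline = 0.0
for episode in range(200):
    s, a, r = toy_rollout()
    reinforce_update(s, a, r, baseline=baseline)
    baseline = 0.9 * baseline + 0.1 * r  # running-average baseline to reduce variance
```

In the paper's setting the rollout would correspond to running the planner on a sampled problem, switching search approaches according to the policy, and the episode reward would be the chosen performance metric; the running-average baseline here is only one common variance-reduction choice.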
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
1 article.
1. An AI Planning Approach to Factory Production Planning and Scheduling. 2022 International Conference on Machine Learning and Knowledge Engineering (MLKE), February 2022.