Abstract
Ye [2011] recently showed that the simplex method with Dantzig's pivoting rule, as well as Howard's policy iteration algorithm, solve discounted Markov decision processes (MDPs) with a constant discount factor in strongly polynomial time. More precisely, Ye showed that both algorithms terminate after at most $O\left(\frac{mn}{1-\gamma}\log\frac{n}{1-\gamma}\right)$ iterations, where $n$ is the number of states, $m$ is the total number of actions in the MDP, and $0<\gamma<1$ is the discount factor. We improve Ye's analysis in two respects. First, we improve the bound given by Ye and show that Howard's policy iteration algorithm actually terminates after at most $O\left(\frac{m}{1-\gamma}\log\frac{n}{1-\gamma}\right)$ iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard's policy iteration algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards. This provides the first strongly polynomial algorithm for solving these games, resolving a long-standing open problem. Combined with other recent results, this gives a complete characterization of the complexity of the standard strategy iteration algorithm for 2-player turn-based stochastic games: it is strongly polynomial for a fixed discount factor, and exponential otherwise.
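For concreteness, the following is a minimal sketch of Howard's policy iteration for a discounted MDP in the tabular setting the abstract refers to (the algorithm whose iteration count is bounded by $O\left(\frac{m}{1-\gamma}\log\frac{n}{1-\gamma}\right)$). It assumes, purely for illustration, that every action is available in every state, that P is an m-by-n-by-n array of transition matrices, and that r is an m-by-n reward array; the function name and parameters are hypothetical and not taken from the paper.

```python
import numpy as np

def howard_policy_iteration(P, r, gamma):
    """Howard's policy iteration for a discounted MDP (illustrative sketch).

    P[a] is the n-by-n transition matrix of action a, r[a] the length-n reward
    vector of action a, and 0 < gamma < 1 the discount factor. Assumes every
    action is available in every state.
    """
    m, n, _ = P.shape
    policy = np.zeros(n, dtype=int)            # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n), :]      # row s is the transition row of policy[s] at state s
        r_pi = r[policy, np.arange(n)]
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        # Policy improvement (Howard's rule): perform all improving switches at once.
        q = r + gamma * (P @ v)                # q[a, s] = value of playing a once in s, then pi
        improving = q.max(axis=0) > q[policy, np.arange(n)] + 1e-12
        if not improving.any():
            return policy, v                   # no improving switch: current policy is optimal
        policy = np.where(improving, q.argmax(axis=0), policy)
```

In this sketch, every state with a strictly better action is switched simultaneously in each round, which is the "greedy all-switches" rule analyzed in the paper's bounds; switching only a single state per round would correspond to simple policy iteration instead.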
Funder
Center for Algorithmic Game Theory
Sino-Danish Center for the Theory of Interactive Computation
Israel Science Foundation
Carlsbergfondet
Danish National Research Foundation
Center for Research in the Foundations of Electronic Markets
Google
Danish Strategic Research Council
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Hardware and Architecture, Information Systems, Control and Systems Engineering, Software
Cited by
34 articles.