Abstract
The problem studied is that of controlling a finite Markov chain so as to maximize the long-run expected reward per unit time. The chain's transition probabilities depend upon an unknown parameter taking values in a subset [a, b] of R^n. A control policy is defined as the probability of selecting a control action for each state of the chain. A Taylor-like expansion formula is derived for the expected reward in terms of policy variations. Based on that result, a recursive stochastic gradient algorithm is presented for adapting the control policy at consecutive times. The gradient depends on the estimated transition parameter, which is itself recursively updated using the gradient of the likelihood function. Convergence with probability 1 is proved for both the control and estimation algorithms.
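To make the two coupled recursions in the abstract concrete, the following is a minimal numerical sketch, not the paper's exact algorithm: the transition parameter is updated by a stochastic-gradient step on the log-likelihood of each observed transition (projected back onto [a, b]), and the randomized policy is improved by a gradient step on the long-run average reward computed under the current parameter estimate. The 2-state/2-action chain, the parametrization P(theta), the reward table, and the step sizes are all illustrative assumptions, and the finite-difference, certainty-equivalence policy gradient stands in for the paper's Taylor-expansion-based estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 2, 2                      # assumed toy numbers of states and actions
a_lo, a_hi = 0.1, 0.9            # parameter set [a, b] (assumed scalar here)
theta_true = 0.7                 # unknown parameter generating the observations
reward = np.array([[1.0, 0.0],   # r(state, action), illustrative values
                   [0.0, 2.0]])

def P(theta):
    # Transition probabilities P[action, state, next_state]; assumed parametrization.
    return np.array([[[theta, 1.0 - theta], [1.0 - theta, theta]],
                     [[1.0 - theta, theta], [theta, 1.0 - theta]]])

def policy(logits):
    # Randomized control policy pi(action | state) via a softmax of free logits.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def avg_reward(logits, theta):
    # Long-run expected reward per unit time under the policy and parameter theta.
    pi = policy(logits)
    Ppi = np.einsum('sa,asj->sj', pi, P(theta))          # state transition matrix
    w, v = np.linalg.eig(Ppi.T)                          # stationary distribution
    mu = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    mu /= mu.sum()
    return float(mu @ (pi * reward).sum(axis=1))

logits = np.zeros((S, A))        # uniform initial policy
theta_hat = 0.5                  # initial parameter estimate
x, eps = 0, 1e-5
for n in range(1, 20001):
    pi = policy(logits)
    act = rng.choice(A, p=pi[x])
    x_next = rng.choice(S, p=P(theta_true)[act, x])

    # Parameter step: gradient of the log-likelihood of the observed transition,
    # with the estimate projected back onto [a, b].
    grad_ll = (np.log(P(theta_hat + eps)[act, x, x_next]) -
               np.log(P(theta_hat - eps)[act, x, x_next])) / (2 * eps)
    theta_hat = float(np.clip(theta_hat + grad_ll / n, a_lo, a_hi))

    # Policy step: finite-difference gradient of the average reward under the
    # current estimate (a certainty-equivalence stand-in, taken occasionally).
    if n % 200 == 0:
        base = avg_reward(logits, theta_hat)
        g = np.zeros_like(logits)
        for s in range(S):
            for a in range(A):
                d = np.zeros_like(logits)
                d[s, a] = eps
                g[s, a] = (avg_reward(logits + d, theta_hat) - base) / eps
        logits += (50.0 / n) * g
    x = x_next

print("theta_hat =", round(theta_hat, 3))
print("policy =", np.round(policy(logits), 2))
```

With the assumed rewards, the estimate drifts toward theta_true and the policy concentrates on the action pair that keeps the chain in the higher-reward transitions; decreasing step sizes of order 1/n mirror the kind of gains under which the paper's convergence-with-probability-1 results are stated.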
Publisher
Cambridge University Press (CUP)
Subject
Applied Mathematics, Statistics and Probability
Cited by
14 articles.