Abstract
In this work, we consider the time discretization of stochastic optimal control problems. Under general assumptions on the data, we prove that the value functions of the discrete-time problems converge to the value function of the original problem. Moreover, we prove that any sequence of optimal solutions of the discrete problems is a minimizing sequence for the continuous one. As a consequence of the Dynamic Programming Principle for the discrete problems, the minimizing sequence can be taken in discrete-time feedback form.
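The scheme the abstract refers to can be illustrated on a toy instance that is not taken from the paper: a one-dimensional controlled SDE dX = u dt + σ dW with running cost x² + u² and terminal cost x². An Euler–Maruyama time discretization turns it into a discrete-time problem, whose value functions are computed by backward dynamic programming on a state grid; all names, grids, and parameters below are illustrative assumptions.

```python
import numpy as np

def discrete_value_function(T=1.0, N=20, sigma=0.5,
                            xgrid=None, controls=None, nodes=5):
    """Backward induction for a toy time-discretized control problem.

    Hypothetical instance: dX = u dt + sigma dW, running cost x^2 + u^2,
    terminal cost x^2. Returns the state grid and the discrete value
    function at time 0.
    """
    dt = T / N
    if xgrid is None:
        xgrid = np.linspace(-2.0, 2.0, 81)
    if controls is None:
        controls = np.linspace(-2.0, 2.0, 21)
    # Gauss-Hermite (probabilists') nodes approximate the expectation
    # over the standard Gaussian noise increment.
    xi, w = np.polynomial.hermite_e.hermegauss(nodes)
    w = w / w.sum()
    V = xgrid ** 2  # terminal condition V_N(x) = x^2
    for _ in range(N):
        Q = np.empty((controls.size, xgrid.size))
        for i, u in enumerate(controls):
            # One Euler-Maruyama step for each quadrature node,
            # then linear interpolation of the next value function.
            xnext = xgrid[None, :] + u * dt + sigma * np.sqrt(dt) * xi[:, None]
            EV = w @ np.interp(xnext, xgrid, V)
            Q[i] = dt * (xgrid ** 2 + u ** 2) + EV
        # Discrete Dynamic Programming Principle: minimize over controls.
        V = Q.min(axis=0)
    return xgrid, V

xgrid, V0 = discrete_value_function()
```

The inner minimization realizes the discrete Dynamic Programming Principle, and the minimizing control at each grid point gives the discrete-time feedback law mentioned in the abstract; refining `N` and the grids is what the convergence result concerns.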
Subject
Computational Mathematics, Control and Optimization, Control and Systems Engineering
Cited by
2 articles.