Abstract
The Receding Horizon Control (RHC) strategy replaces an infinite-horizon stabilization problem with a sequence of finite-horizon optimal control problems, which are numerically more tractable. The dynamic programming principle ensures that if the finite-horizon problems are formulated with the exact value function as terminal penalty, then the RHC method generates an optimal control. This article deals with the case where the terminal cost is chosen as a cut-off Taylor approximation of the value function. The main result is an error rate estimate for the control generated by such a method, compared with the optimal control. The estimate obtained is of the same order as the employed Taylor approximation and decreases at an exponential rate with respect to the prediction horizon. To illustrate the methodology, the article focuses on a class of bilinear optimal control problems in infinite-dimensional Hilbert spaces.
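To fix ideas, one RHC step can be sketched schematically as below; the notation (state y, control u, running cost \ell, dynamics f, prediction horizon T, sampling interval \tau, and terminal penalty \varphi) is illustrative and not taken from the article, with \varphi standing in for the cut-off Taylor approximation of the value function.

% Schematic RHC step (illustrative notation, not the article's): at sampling
% time t_k with current state y_k, solve the finite-horizon problem with
% terminal penalty \varphi, apply the computed control on [t_k, t_k + \tau],
% and repeat on the shifted horizon.
\begin{align*}
  \min_{u}\quad & \int_{t_k}^{t_k+T} \ell\big(y(t),u(t)\big)\,\mathrm{d}t
                  \;+\; \varphi\big(y(t_k+T)\big) \\
  \text{subject to}\quad & \dot y(t) = f\big(y(t),u(t)\big),
                  \qquad y(t_k) = y_k .
\end{align*}
% The restriction of the minimizer u to [t_k, t_k + \tau] is applied to the
% system; the resulting state y(t_k + \tau) serves as the initial condition
% for the next problem on [t_{k+1}, t_{k+1} + T], with t_{k+1} = t_k + \tau.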
Funder
H2020 European Research Council
Subject
Computational Mathematics, Control and Optimization, Control and Systems Engineering
Cited by
6 articles.