Affiliation:
1. School of Automation, Beijing Institute of Technology, Beijing, China
2. Zhongyuan University of Technology, Zhengzhou, China
Abstract
This paper proposes a novel learning‐based model predictive control (LMPC) scheme for discrete‐time nonlinear systems. It overcomes the challenge of manually designing the terminal conditions in traditional MPC and enhances control performance. The scheme employs value iteration (VI) from reinforcement learning (RL) and autonomously designs the terminal cost by iteratively performing value function learning and policy updates under known dynamics and constraints. In contrast to existing schemes that combine RL with MPC, the proposed scheme explicitly accounts for the approximation errors incurred in each iteration. Furthermore, a rigorous theoretical analysis is provided, covering the convergence of VI and the stability and performance of the closed‐loop system. The influences of the prediction horizon and the initial terminal cost on performance are also investigated. Simulation results on a linear system verify the theoretical properties of the LMPC scheme and show that it achieves (near‐)optimal performance. Moreover, its advantage over traditional MPC is demonstrated on a nonholonomic vehicle regulation example.
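To illustrate the idea of designing the terminal cost by value iteration, the following is a minimal sketch for the special case of a scalar linear‐quadratic problem, where VI reduces to a Riccati recursion and its limit supplies the quadratic terminal cost for MPC. This example and all names in it (the system parameters `a`, `b` and cost weights `q`, `r`) are illustrative assumptions, not the paper's actual algorithm, which handles nonlinear systems and approximation errors.

```python
def vi_terminal_cost(a, b, q, r, iters=200):
    """Value iteration for the scalar LQ problem
        x+ = a*x + b*u,  stage cost q*x^2 + r*u^2.
    The value function has the form V_k(x) = p_k * x^2, so VI amounts to
    iterating the scalar Riccati recursion on p. The limit p* defines a
    terminal cost V(x) = p* * x^2 usable in an MPC objective."""
    p = 0.0  # initial terminal cost V_0(x) = 0
    for _ in range(iters):
        # Policy update (implicit): u = -K*x, K = a*b*p / (r + b^2*p).
        # Value update: minimize stage cost plus V_k at the successor state.
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p

# Example: an unstable scalar system a = 1.1 with unit weights.
p_star = vi_terminal_cost(a=1.1, b=1.0, q=1.0, r=1.0)
```

Here `p_star` is (numerically) the fixed point of the recursion, i.e. the solution of the discrete‐time algebraic Riccati equation; using it as the MPC terminal cost recovers the infinite‐horizon optimal controller regardless of the prediction horizon.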
Funder
Natural Science Foundation of Beijing Municipality
National Natural Science Foundation of China
Subject
Electrical and Electronic Engineering, Industrial and Manufacturing Engineering, Mechanical Engineering, Aerospace Engineering, Biomedical Engineering, General Chemical Engineering, Control and Systems Engineering