Affiliation:
1. School of Mathematics and Statistics, Beijing Institute of Technology, Beijing, China
Abstract
This article proposes a novel method to accelerate the boundary feedback control design of cascaded parabolic partial differential equations (PDEs) through DeepONet. The backstepping method has been widely used in boundary control problems for PDE systems, but solving the backstepping kernel functions can be time‐consuming. To address this, a neural operator (NO) learning scheme is leveraged to accelerate the control design of cascaded parabolic PDEs. DeepONet, a class of deep neural networks designed for approximating nonlinear operators, has shown potential for approximating PDE backstepping designs in recent studies. Specifically, we focus on approximating the gain kernel PDEs for two cascaded parabolic PDEs. We use neural operators to map only two of the kernel functions, while the other two are computed from their analytical solutions, thus simplifying the training process. We establish the continuity and boundedness of the kernels and demonstrate the existence of arbitrarily close DeepONet approximations to the kernel PDEs. Furthermore, we show that the DeepONet‐approximated gain kernels preserve stability when substituted for the exact backstepping gain kernels. Notably, the DeepONet operator computes such gain functions two orders of magnitude faster than conventional PDE solvers, and its theoretically proven stabilizing capability is validated through simulations.
Funder
National Natural Science Foundation of China