Affiliation:
1. School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150, USA
Abstract
This paper presents a comprehensive framework for nonlinear analysis and optimal feedback control synthesis for nonlinear stochastic dynamical systems, with a focus on connecting stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory within a unified perspective. We show that asymptotic stability in probability of the closed-loop nonlinear system is guaranteed by a Lyapunov function that is also the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation, thereby certifying both stochastic stability and optimality. In addition, optimal feedback controllers for affine nonlinear systems are developed using an inverse optimality framework tailored to the stochastic stabilization problem. Finally, the paper derives stability margins for optimal and inverse optimal stochastic feedback regulators, establishing gain, sector, and disk margin guarantees for nonlinear stochastic dynamical systems controlled by nonlinear optimal and inverse optimal Hamilton–Jacobi–Bellman controllers.
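For concreteness, the steady-state stochastic Hamilton–Jacobi–Bellman equation referred to above can be sketched for an affine system; the dynamics $\mathrm{d}x = [f(x) + G(x)u]\,\mathrm{d}t + D(x)\,\mathrm{d}w$, the cost integrand $L_1(x) + u^{\mathrm{T}} R_2(x) u$, and all symbol names here are illustrative assumptions, not forms quoted from the paper. Writing $V$ for the value function, $V'$ for its row gradient, and $V''$ for its Hessian, the steady-state equation reads

% Sketch of the steady-state stochastic HJB equation (assumed affine
% dynamics and nonlinear-nonquadratic cost; symbols are illustrative)
\[
  0 \;=\; \min_{u}\Bigl[\, L_1(x) + u^{\mathrm{T}} R_2(x)\, u
        + V'(x)\bigl(f(x) + G(x)u\bigr)
        + \tfrac{1}{2}\,\operatorname{tr}\bigl(D^{\mathrm{T}}(x)\, V''(x)\, D(x)\bigr) \Bigr],
\]
% The quadratic-in-u structure gives the minimizing feedback in closed form:
\[
  \phi(x) \;=\; -\tfrac{1}{2}\, R_2^{-1}(x)\, G^{\mathrm{T}}(x)\, V'(x)^{\mathrm{T}}.
\]

Under this reading, the same $V$ plays two roles: it is the optimal cost-to-go and, simultaneously, a stochastic Lyapunov function certifying asymptotic stability in probability of the closed loop.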
Funder
Air Force Office of Scientific Research