Abstract
This paper introduces an inclusive class of fixed-time stable continuous-time gradient flows (GFs). This class of GFs is then leveraged to learn optimal control solutions for nonlinear systems in fixed time. It is shown that the presented GF guarantees convergence within a fixed time, from any initial condition, to the exact minimum of functions that satisfy the Polyak–Łojasiewicz (PL) inequality. The presented fixed-time GF is then utilized to design fixed-time optimal adaptive control algorithms. To this end, a fixed-time reinforcement learning (RL) algorithm is developed on the basis of a single network adaptive critic (SNAC) to learn the solution to an infinite-horizon optimal control problem in a fixed-time convergent, online, adaptive, and forward-in-time manner. It is shown that the PL inequality in the presented RL algorithm amounts to a mild inequality condition on a few collected samples. This condition is much weaker than the standard persistence of excitation (PE) and finite-duration PE conditions, which rely on a rank condition on a dataset. This is crucial for learning-enabled control systems, because the controller can commit to learning an optimal policy from the outset, in sharp contrast to existing results that rely on PE or rank conditions and can begin learning only after sufficiently rich data samples are collected. Simulation results are provided to validate the performance and efficacy of the presented fixed-time RL algorithm.
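To make the central idea concrete, the sketch below integrates one common member of the fixed-time GF family from the fixed-time stability literature, xdot = -c1 g/||g||^(1/2) - c2 g ||g||^(1/2) with g = ∇f(x), on a quadratic objective (which satisfies the PL inequality). The specific exponents, gains, and the forward-Euler discretization are illustrative assumptions, not necessarily the exact class of flows analyzed in the paper.

```python
import numpy as np

# Assumed test objective: f(x) = 0.5 * x^T Q x, which satisfies the
# PL inequality with constant equal to the smallest eigenvalue of Q.
Q = np.diag([1.0, 10.0])

def grad_f(x):
    return Q @ x

def fixed_time_flow(x0, c1=1.0, c2=1.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of a representative fixed-time gradient flow:
        xdot = -c1 * g / ||g||^(1/2) - c2 * g * ||g||^(1/2).
    The first term dominates near the minimum (finite-time convergence);
    the second dominates far away (bounding the settling time uniformly
    in the initial condition, hence 'fixed time')."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        n = np.linalg.norm(g)
        if n < 1e-12:  # gradient vanished: at the minimizer
            break
        x = x - dt * (c1 * g / np.sqrt(n) + c2 * g * np.sqrt(n))
    return x

x_star = fixed_time_flow([5.0, -3.0])
print(np.linalg.norm(x_star))
```

In continuous time the norm of the slow error component obeys roughly d|e|/dt <= -sqrt(|e|) near the origin, so it reaches zero in finite time; the Euler discretization only chatters at the O(dt^2) level, which is why the simulated iterate lands essentially at the minimizer well before the step budget is exhausted.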