Abstract
In this paper, we extend deterministic learning theory to sampled-data nonlinear systems. Based on the Euler approximate model, an adaptive neural network identifier with a normalized learning algorithm is proposed. It is proven that, with a properly chosen sampling period, the overall system is guaranteed to be stable and a subset of the neural network weights converges exponentially to their optimal values, provided the partial persistent excitation (PE) condition is satisfied. Consequently, locally accurate learning of the nonlinear dynamics can be achieved, and the acquired knowledge can be represented by constant-weight neural networks. Furthermore, we present a performance analysis of the learning algorithm by developing explicit bounds on the learning rate and accuracy. Several factors that influence learning, including the PE level, the learning gain, and the sampling period, are investigated. Simulation studies demonstrate the effectiveness of the approach.
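The scheme the abstract describes (an Euler approximate model plus an adaptive identifier with a normalized weight update) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the plant dynamics f, the RBF regressor, the periodic input used to induce partial PE, and all gains and constants are assumptions chosen for the example.

```python
import numpy as np

# True dynamics (unknown to the learner): x' = f(x) + u, with f(x) = -x + sin(x).
# Sampled-data plant via the Euler approximate model: x[k+1] = x[k] + T*(f(x[k]) + u[k]).

T = 0.05                           # sampling period (must be small enough for stability)
steps = 4000
centers = np.linspace(-3, 3, 25)   # Gaussian RBF centers covering the visited region
width = 0.3

def S(x):
    """RBF regressor vector evaluated at state x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def f(x):
    return -x + np.sin(x)

gamma = 10.0                   # learning gain
W = np.zeros_like(centers)     # neural network weight estimates
x = 2.0
errs = []                      # |W@S(x) - f(x)| along the trajectory
for k in range(steps):
    u = 1.5 * np.cos(0.5 * k * T)          # periodic input: excites a partial PE condition
    s = S(x)
    x_next = x + T * (f(x) + u)            # plant step (Euler-discretized)
    x_hat = x + T * (W @ s + u)            # identifier's one-step prediction
    e = x_hat - x_next                     # prediction error = T*(W@s - f(x))
    W = W - gamma * e * s / (1.0 + s @ s)  # normalized gradient update
    errs.append(abs(W @ s - f(x)))
    x = x_next
```

The division by `1 + s @ s` is the normalization: it bounds the effective step size regardless of the regressor magnitude, which is what allows a fixed gain to work across the whole operating region. Along the periodic orbit the approximation error shrinks, illustrating the locally accurate learning the paper proves.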
Publisher
Springer Science and Business Media LLC
Cited by
30 articles.