Abstract
This work is devoted to modeling and investigating an architecture for delayed recurrent neural networks based on delay differential equations. The use of discrete and distributed delays makes it possible to model the computation of the next state using internal memory, which corresponds to the artificial recurrent neural network architectures used in deep learning. The problem of exponential stability of recurrent neural network models with multiple discrete and distributed delays is considered. For this purpose, the direct Lyapunov method of stability analysis and the gradient descent method are applied sequentially. First, the direct method is used to construct stability conditions (yielding an exponential estimate) that involve a tuple of positive definite matrices. Then these stability conditions (i.e., the exponential estimate) are optimized with respect to this tuple of matrices using a generalized gradient method. The exponential estimates are constructed on the basis of a Lyapunov–Krasovskii functional. An optimization method for improving the estimates is proposed, based on the notion of the generalized gradient of a convex function of a tuple of positive definite matrices. The search for the optimal exponential estimate is reduced to finding a saddle point of the Lagrange function.
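As an illustration of the class of models the abstract describes, the following is a minimal sketch of a recurrent network with a single discrete delay, x'(t) = -A x(t) + W tanh(x(t - τ)), integrated with an explicit Euler scheme. The specific matrices A and W, the delay τ, and the tanh activation are assumptions chosen for illustration, not the paper's exact model or stability conditions.

```python
import numpy as np

def simulate_delayed_rnn(A, W, tau, x0, t_end, dt=0.01):
    """Euler integration of the delayed RNN x'(t) = -A x(t) + W tanh(x(t - tau)).

    A constant initial function x(t) = x0 for t <= 0 is assumed.
    This is an illustrative sketch, not the paper's construction.
    """
    steps = int(t_end / dt)
    delay_steps = int(tau / dt)
    # Trajectory buffer initialized with the constant history x0.
    traj = np.tile(np.asarray(x0, dtype=float), (steps + 1, 1))
    for k in range(steps):
        # Delayed state: clamp to the constant history for t - tau <= 0.
        delayed = traj[max(k - delay_steps, 0)]
        dx = -A @ traj[k] + W @ np.tanh(delayed)
        traj[k + 1] = traj[k] + dt * dx
    return traj

# Hypothetical example: a diagonally dominant -A with a small coupling W,
# a regime in which the zero solution is expected to be exponentially stable.
A = np.diag([1.0, 1.5])
W = np.array([[0.2, -0.1], [0.05, 0.3]])
traj = simulate_delayed_rnn(A, W, tau=0.5, x0=[1.0, -1.0], t_end=20.0)
```

For these illustrative parameters the state decays toward the origin, which is the kind of behavior the paper's exponential estimates quantify via a Lyapunov–Krasovskii functional.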
Subject
Physics and Astronomy (miscellaneous), General Mathematics, Chemistry (miscellaneous), Computer Science (miscellaneous)
Cited by
1 article.