Author:
Suleimenov Ibragim E., Matrassulova Dinara K., Moldakhan Inabat, Vitulyova Yelizaveta S., Kabdushev Sherniyaz B., Bakirov Akhat S.
Abstract
The question of the nature of the distributed memory of neural networks is considered. Since the memory capacity of a neural network depends on the presence of feedback in its structure, this question requires further study. It is shown that neural networks without feedback can be exhaustively described by analogy with noise-proof (error-correcting) coding algorithms. For such networks the use of the term "memory" is not justified at all. Moreover, the functioning of such networks obeys an analog of the Shannon formula, first obtained in this paper. This formula makes it possible to specify in advance the number of images that a neural network can recognize for a given code distance between them. It is shown that in the case of artificial neural networks with negative feedback it is indeed justified to speak of the distributed memory of the network. It is also shown that in this case the boundary between the distributed memory of a neural network and the information storage mechanisms of elements such as RS flip-flops (RS-triggers) is diffuse. For the given example, a specific formula is obtained that connects the number of possible states of the network's outputs (and hence the capacity of its memory) with the number of its elements.
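The abstract does not reproduce the paper's Shannon-like formula itself. As a rough illustration of the same idea from noise-proof coding — bounding the number of distinguishable patterns of a given length for a given minimum code distance — the classical sphere-packing (Hamming) bound can be sketched as follows; this is a standard coding-theory result, not the formula derived in the paper:

```python
from math import comb

def hamming_bound(n: int, d: int) -> int:
    """Sphere-packing upper bound on the number of binary codewords
    of length n with minimum pairwise code distance d."""
    t = (d - 1) // 2  # number of errors a code with distance d can correct
    # Volume of a Hamming ball of radius t around each codeword
    ball_volume = sum(comb(n, k) for k in range(t + 1))
    return 2**n // ball_volume

# Example: length-7 binary codewords with minimum distance 3
print(hamming_bound(7, 3))  # → 16, attained by the Hamming(7,4) code
```

In this analogy, n plays the role of the number of network outputs and the bound caps the number of images recognizable without confusion at the given code distance.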
Publisher
Institute of Advanced Engineering and Science
Subject
Electrical and Electronic Engineering, Control and Optimization, Computer Networks and Communications, Hardware and Architecture, Instrumentation, Information Systems, Control and Systems Engineering, Computer Science (miscellaneous)
Cited by
11 articles.