Affiliation:
1. College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China
Abstract
In this paper, we propose a structural developmental neural network to address the plasticity-stability dilemma, computational inefficiency, and lack of prior knowledge in continual unsupervised learning. The model uses competitive learning rules and dynamic neurons with information saturation to achieve parameter adjustment and adaptive structure development. A dynamic neuron increases its information saturation after winning the competition and uses this value to modulate both the adjustment of its parameters and the timing of its division. By dividing to generate new neurons, the network not only remains sensitive to novel features but can also subdivide repeatedly learned classes. Dynamic neurons with information saturation and a division mechanism simulate the long- and short-term memory of the human brain, which enables the network to continually learn new samples while retaining previously learned results. The parent-child relationships between neurons created by division enable the network to simulate the human cognitive process of gradually refining the perception of objects. By setting the clustering-layer parameter, users can choose the desired degree of class subdivision. Experimental results on artificial and real-world datasets demonstrate that the proposed model is feasible for unsupervised learning tasks in instance-increment and class-increment scenarios and outperforms prior structural developmental neural networks.
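The abstract does not specify the update rules, so the following is only a minimal Python/NumPy sketch of how saturation-modulated competitive learning with neuron division might be organized. The class names (DynamicNeuron, SaturationCompetitiveNet), the learning-rate formula, the division threshold, and the post-division saturation reset are illustrative assumptions, not the authors' method.

```python
import numpy as np

class DynamicNeuron:
    """A competitive unit holding a weight vector and an information-saturation value."""
    def __init__(self, weight, parent=None):
        self.weight = np.asarray(weight, dtype=float)
        self.saturation = 0.0          # grows each time the neuron wins (assumed counter)
        self.parent = parent           # parent-child link created by division

class SaturationCompetitiveNet:
    """Sketch: winner-take-all updates modulated by saturation, with neuron division."""
    def __init__(self, dim, div_threshold=5.0, base_lr=0.5):
        self.neurons = [DynamicNeuron(np.random.rand(dim))]
        self.div_threshold = div_threshold   # assumed: division fires above this level
        self.base_lr = base_lr

    def fit_sample(self, x):
        x = np.asarray(x, dtype=float)
        # 1. Competition: the nearest neuron wins.
        winner = min(self.neurons, key=lambda n: np.linalg.norm(x - n.weight))
        # 2. Saturation-modulated parameter adjustment: the more saturated
        #    the winner, the smaller its update (stability for old knowledge).
        lr = self.base_lr / (1.0 + winner.saturation)
        winner.weight += lr * (x - winner.weight)
        winner.saturation += 1.0
        # 3. Division: a saturated neuron spawns a child near itself; the parent
        #    preserves what was learned while the child stays plastic for novel
        #    or finer-grained features.
        if winner.saturation >= self.div_threshold:
            child = DynamicNeuron(winner.weight + 0.01 * np.random.randn(x.size),
                                  parent=winner)
            self.neurons.append(child)
            winner.saturation = 0.0    # assumed reset after division
```

Feeding samples one at a time through fit_sample mimics the instance-increment setting described in the abstract: each call adjusts the winning neuron and may grow the network, while the recorded parent links form the hierarchy used for class subdivision.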
Funder
National Natural Science Foundation of China
Publisher
Institution of Engineering and Technology (IET)
Subject
Artificial Intelligence, Computer Networks and Communications, Computer Vision and Pattern Recognition, Human-Computer Interaction, Information Systems
Cited by
10 articles.