Affiliation:
1. Department of Computer Science, University of Colorado, Campus Box 430, Boulder, CO 80309 USA
Abstract
Previous neural network learning algorithms for sequence processing are computationally expensive and perform poorly when it comes to long time lags. This paper first introduces a simple principle for reducing the descriptions of event sequences without loss of information. A consequence of this principle is that only unexpected inputs can be relevant. This insight leads to the construction of neural architectures that learn to “divide and conquer” by recursively decomposing sequences. I describe two architectures. The first functions as a self-organizing multilevel hierarchy of recurrent networks. The second, involving only two recurrent networks, tries to collapse a multilevel predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets.
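The abstract's core idea is that a sequence can be described without loss of information by recording only the inputs that a predictor fails to anticipate. Below is a minimal sketch of that history-compression principle under a simplifying assumption: the predictor is a toy deterministic "repeat the previous symbol" rule rather than the trained recurrent predictor used in the paper, so it only illustrates why keeping the unexpected events suffices to reconstruct the whole sequence.

```python
# Sketch of the history-compression principle: given a deterministic
# predictor of the next symbol, the sequence is fully determined by the
# positions and values of the mispredicted (unexpected) inputs.
# The predictor here is a toy rule chosen purely for illustration.

def predict_next(prev):
    """Toy deterministic predictor: expect the previous symbol to repeat."""
    return prev

def compress(seq, start):
    """Return (position, symbol) pairs only where the prediction failed."""
    unexpected = []
    prev = start
    for t, x in enumerate(seq):
        if predict_next(prev) != x:
            unexpected.append((t, x))
        prev = x
    return unexpected

def decompress(unexpected, length, start):
    """Reconstruct the original sequence from the unexpected events alone."""
    events = dict(unexpected)
    seq, prev = [], start
    for t in range(length):
        x = events.get(t, predict_next(prev))
        seq.append(x)
        prev = x
    return seq

if __name__ == "__main__":
    seq = list("aaaabbbbbbaaacc")
    comp = compress(seq, start="a")
    assert decompress(comp, len(seq), start="a") == seq
    print(comp)  # only the surprising symbols: [(4, 'b'), (10, 'a'), (13, 'c')]
```

The better the predictor, the fewer unexpected events remain; in the paper's architectures, a higher-level recurrent network only has to process this much shorter stream of surprises, which is what shortens the effective time lags it must bridge.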
Subject
Cognitive Neuroscience, Arts and Humanities (miscellaneous)
Cited by
208 articles.