Affiliation:
1. Department of Cognitive Science, D-015, University of California, San Diego, La Jolla, California 92093, USA
Abstract
Recurrent connections in neural networks potentially allow information about events occurring in the past to be preserved and used in current computations. How effectively this potential is realized depends on the power of the learning algorithm used. As an example of a task requiring recurrency, Servan-Schreiber, Cleeremans, and McClelland [1] have applied a simple recurrent learning algorithm to the task of recognizing finite-state grammars of increasing difficulty. These nets showed considerable power and were able to learn fairly complex grammars by emulating the state machines that produced them. However, there was a limit to the difficulty of the grammars that could be learned. We have applied a more powerful recurrent learning procedure, called real-time recurrent learning (RTRL) [2,6], to some of the same problems studied by Servan-Schreiber, Cleeremans, and McClelland. The RTRL algorithm solved more difficult forms of the task than the simple recurrent networks. The internal representations developed by RTRL networks revealed that they learn a rich set of internal states that represent more about the past than is required by the underlying grammar. The dynamics of the networks are determined by the state structure and are not chaotic.
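The abstract names real-time recurrent learning (RTRL) as the training procedure. The sketch below is a minimal, illustrative NumPy rendering of the standard RTRL update for a fully recurrent sigmoid network (the forward-propagated sensitivity recursion and the resulting online weight change); the class name, array shapes, learning rate, and initialization are assumptions for illustration, not the authors' original experimental setup.

```python
import numpy as np

# Minimal sketch of real-time recurrent learning (RTRL) for a fully recurrent
# sigmoid network. Hypothetical names and hyperparameters; the grammar task
# and network sizes used in the paper are not reproduced here.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RTRLNetwork:
    def __init__(self, n_in, n_units, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.n_in, self.n = n_in, n_units
        self.n_cols = n_units + n_in + 1           # recurrent + input + bias columns
        self.W = rng.normal(0.0, 0.5, (n_units, self.n_cols))
        self.y = np.zeros(n_units)                 # unit activations y(t)
        # Sensitivities p[k, i, j] = d y_k / d W_ij, carried forward in time.
        self.p = np.zeros((n_units, n_units, self.n_cols))
        self.lr = lr

    def step(self, x, target=None):
        """Advance one time step; if a target vector is given, update W online."""
        z = np.concatenate([self.y, np.asarray(x, float), [1.0]])  # z(t): state, input, bias
        s = self.W @ z
        y_new = sigmoid(s)
        fprime = y_new * (1.0 - y_new)

        # RTRL sensitivity recursion:
        #   p_ij^k(t+1) = f'(s_k) [ sum_l W_kl p_ij^l(t) + delta_ik * z_j(t) ]
        W_rec = self.W[:, :self.n]                 # weights from recurrent units
        prop = np.einsum('kl,lij->kij', W_rec, self.p)
        direct = np.zeros_like(self.p)
        for k in range(self.n):
            direct[k, k, :] = z
        p_new = fprime[:, None, None] * (prop + direct)

        if target is not None:
            e = np.zeros(self.n)
            e[:len(target)] = np.asarray(target, float) - y_new[:len(target)]
            grad = np.einsum('k,kij->ij', e, p_new)  # -dE/dW for E = 0.5 * sum(e^2)
            self.W += self.lr * grad                 # online update at every time step

        self.y, self.p = y_new, p_new
        return y_new
```

In a grammar-recognition setting of the kind described above, each symbol of a string would be presented as an input vector `x` at successive time steps, with the target being (for example) the permissible next symbols; because the sensitivities are propagated forward rather than unrolled backward, the weights can be updated at every step as the sequence streams in, which is what distinguishes RTRL from the simple recurrent networks it is compared against.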
Publisher
World Scientific Publishing Co Pte Ltd
Subject
Computer Networks and Communications, General Medicine
Cited by
26 articles.