1. C. L. Giles and C. W. Omlin, “Extraction, insertion and refinement of symbolic rules in dynamically-driven recurrent neural networks,” Connection Science, vol. 5, nos. 3–4, p. 307, 1993. Special Issue on Architectures for Integrating Symbolic and Neural Processes.
2. R. L. Watrous and G. M. Kuhn, “Induction of finite-state languages using second-order recurrent networks,” Neural Computation, vol. 4, pp. 406–414, May 1992.
3. H. Siegelmann and E. Sontag, “On the computational power of neural nets,” in Proceedings of the Fifth ACM Workshop on Computational Learning Theory, (New York, NY), pp. 440–449, ACM, 1992.
4. P. Frasconi, M. Gori, M. Maggini, and G. Soda, “Representation of finite state automata in recurrent radial basis function networks,” Machine Learning, vol. 23, pp. 5–32, 1996.
5. J. F. Kolen, “Recurrent networks: State machines or iterated function systems?,” in Proceedings of the 1993 Connectionist Models Summer School (M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, and A. S. Weigend, eds.), (Hillsdale, NJ), pp. 203–210, Erlbaum, 1994.