1. Chauvin, Y. (1987). Generalization as a function of the number of hidden units in back-propagation networks. Unpublished Manuscript. University of California, San Diego, CA.
2. Chauvin, Y. (1989). A back-propagation algorithm with optimal use of the hidden units. In D. Touretzky (Ed.), Advances in Neural Information Processing Systems 1. Palo Alto, CA: Morgan Kaufmann.
3. Chauvin, Y. & Rumelhart, D.E. (In Preparation). Back-propagation: Theory, architectures and applications. Hillsdale, NJ: Lawrence Erlbaum.
4. Földiák, P. (1989). Adaptive network for optimal linear feature extraction. Proceedings of the IJCNN International Joint Conference on Neural Networks, 1, 401–405. Washington D.C., June 18–22.
5. Golden, R.M. & Rumelhart, D.E. (1989). Improving generalization in multi-layer networks through weight decay and derivative minimization. Unpublished Manuscript. Stanford University, Stanford, CA.