1. Embrechts, M. J., Hargis, B. J., & Linton, J. D. (2010). An augmented efficient backpropagation training strategy for deep autoassociative neural networks. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July (pp. 1–6). doi: 10.1109/IJCNN.2010.5596828
2. Gatti, C. J., Embrechts, M. J., & Linton, J. D. (2013). An empirical analysis of reinforcement learning using design of experiments. In Proceedings of the 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 24–26 April (pp. 221–226). Bruges, Belgium: ESANN.
3. LeCun, Y., Bottou, L., Orr, G., & Müller, K. (1998). Efficient backprop. In Orr, G. & Müller, K. (Eds.), Neural Networks: Tricks of the Trade, volume 1524 (pp. 5–50). Berlin: Springer.
4. Moore, A. W. (1990). Efficient memory-based learning for robot control. Unpublished PhD dissertation, University of Cambridge, Cambridge, United Kingdom.
5. Patist, J. P. & Wiering, M. (2004). Learning to play draughts using temporal difference learning with neural networks and databases. In Proceedings of the 13th Belgian-Dutch Conference on Machine Learning, Brussels, Belgium, 8–9 January (pp. 87–94). doi: 10.1007/978-3-540-88190-2_13