1. Alhama, R. G., & Zuidema, W. (2016). Pre-wiring and pre-training: What does a neural network need to learn truly general identity rules? In T. R. Besold, A. Bordes, A. d’Avila Garcez, & G. Wayne (Eds.), Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches (CoCo at NIPS 2016) (Vol. 1773, pp. 26–35). http://ceur-ws.org/Vol-1773/
2. Alhama, R. G., & Zuidema, W. (2018). Pre-wiring and pre-training: What does a neural network need to learn truly general identity rules? Journal of Artificial Intelligence Research, 61, 927–946.
3. Alishahi, A., Barking, M., & Chrupala, G. (2017). Encoding of phonology in a recurrent neural model of grounded speech. In R. Levy, & L. Specia (Eds.), Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017) (pp. 368–378). Vancouver: Association for Computational Linguistics.
4. Altmann, G. T., & Dienes, Z. (1999). Rule learning by seven-month-old infants and neural networks. Science, 284(5416), 875.
5. Altmann, G. T. (2002). Learning and development in neural networks—the importance of prior experience. Cognition, 85(2), B43–B50.