1. For a general introduction to the field of artificial intelligence (AI), see Russell (1995). For a handbook on experimental and theoretical neuroscience, see Arbib (2002). For an exemplary textbook on neuroscience, see Dayan (2001), and for an introduction to neural networks, see Ballard (2000).
2. Somewhat more specialized books for further reading are that by McLeod et al. (1998) on the modeling of cognitive processes by small neural networks and that by O’Reilly (2000) on computational neuroscience.
3. The following review articles are recommended: Rabinovich et al. (2006) on dynamical modeling in neuroscience, Kaelbling et al. (1996) on reinforcement learning, and Carpenter (2001) on learning and memory storage in neural nets.
4. We also recommend that the interested reader go back to selected original literature dealing with “simple recurrent networks” in the context of grammar acquisition (Elman, 1990, 2004), with neural networks for time series prediction tasks (Dorffner, 1996), with “learning by error” (Chialvo and Bak, 1999), with the assignment of the cognitive tasks discussed in Sect. 8.3.1 to specific mammal brain areas (Doya, 1999), with the effect of various Hebbian-type learning rules on memory storage capacity (Chechik et al., 2001), with the concept of “associative thought processes” (Gros, 2007, 2009a) and with “diffusive emotional control” (Gros, 2009b).
5. It is very illuminating to take a look at the freely available databases of human associative knowledge (Nelson et al., 1998; Liu, 2004).