Abstract
Efficient search in enormous combinatorial spaces is an essential component of intelligence. Humans, for instance, are often found searching for optimal action sequences, linguistic structures and causal explanations. Is there any computational domain that provides good-enough and fast-enough solutions to such a diverse set of problems, yet can be robustly implemented over neural substrates? Based on previous accounts, we propose that a Darwinian process, operating over sequential cycles of imperfect copying and selection of informational patterns, is a promising candidate. It is, in effect, a stochastic parallel search that i) does not need local gradient-like information and ii) automatically redistributes its computational resources from globally bad to globally good solution candidates. Here we demonstrate these concepts in a proof-of-principle model based on dynamical output states of reservoir computers as units of evolution. We show that a population of reservoir computing units, arranged in one- or two-dimensional topologies, is capable of maintaining and continually improving upon existing solutions over rugged combinatorial reward landscapes. We also provide a detailed analysis of how neural quantities, such as noise and topology, translate to evolutionary ones, such as mutation rate and population structure. We demonstrate the existence of a sharp error threshold, a neural noise level beyond which information accumulated by an evolutionary process cannot be maintained. We point out the importance of neural representation, akin to genotype-phenotype maps, in determining the efficiency of any evolutionary search in the brain. Novel analysis methods are developed, including neural firing pattern phylogenies that display the unfolding of the process.
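The core loop the abstract refers to, imperfect copying plus selection acting on a population of informational patterns, can be sketched in a few lines. The bit-string representation, toy fitness landscape, and parameter values below are illustrative assumptions for exposition only, not the paper's reservoir-based implementation.

```python
# Minimal sketch of a Darwinian search: cycles of selection and imperfect
# copying over a population of candidates, with no gradient information.
# All names and parameters here are hypothetical choices for illustration.
import random

GENOME_LEN = 32       # length of each candidate bit string
POP_SIZE = 64         # number of candidates evolving in parallel
MUTATION_RATE = 0.01  # per-bit copying error (analogous to neural noise)
GENERATIONS = 200

def fitness(genome):
    # Toy rugged reward: count matching neighbouring bits.
    return sum(genome[i] == genome[i + 1] for i in range(len(genome) - 1))

def imperfect_copy(genome):
    # Copying with per-bit errors plays the role of mutation.
    return [b ^ (random.random() < MUTATION_RATE) for b in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: fitter candidates are copied more often, so computational
    # resources shift from bad to good solutions without any gradient signal.
    weights = [fitness(g) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [imperfect_copy(p) for p in parents]

print("best fitness:", max(fitness(g) for g in population))
```

In this sketch the copying error rate plays the role the abstract assigns to neural noise: raising it past a threshold destroys previously accumulated solutions, a simple analogue of the error threshold discussed in the paper.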
Publisher
Cold Spring Harbor Laboratory