Affiliation:
1. Computer Science, University of Essex, Colchester, England
Abstract
This article presents a scenario in which a simple simulated organism must explore and exploit an environment containing a food pile. The organism learns to observe its environment, store those observations in memory, and use them to plan and navigate toward the regions of highest food density. We compare several reinforcement learning algorithms against an adaptive dynamic programming algorithm and conclude that backpropagation through time can convincingly solve this recurrent neural-network challenge. Furthermore, we argue that this algorithm successfully mimics the fundamental objectives and mental environmental-mapping skills of a minimal ‘functionally sentient’ organism seeking a food pile placed statically or randomly in its environment.
Funder
Business and Local Government Data Research Centre (BLG DRC)
ESRC Research Centre on Micro-Social Change
Economic and Social Research Council
Subject
Behavioral Neuroscience, Experimental and Cognitive Psychology