Abstract
Trade-offs between moving to achieve goals and perceiving the surrounding environment highlight the complexity of continually adapting behaviors. The need to switch between goal-directed and sensory-focused modes, along with the phenomenon of goal emergence, challenges conventional optimization frameworks, necessitating heuristic solutions. In this study, we propose a Bayesian recurrent neural network framework for homeostatic behavior adaptation via hierarchical multimodal integration. In it, the meta-goal of “minimizing predicted future sensory entropy” underpins the dynamic self-organization of future sensorimotor goals and the regulation of their precision in response to increasing sensory uncertainty under unusual physiological conditions. We demonstrated that after learning a hierarchical predictive model of a dynamic environment through random exploration, our Bayesian agent autonomously switched its self-organized behavior between goal-directed feeding and sensory-focused resting. It increased feeding before anticipated food shortages, explaining predictive energy regulation (allostasis) in animals. Our modeling framework opens new avenues for studying brain information processing and anchoring continual behavioral adaptations.
Publisher
Cold Spring Harbor Laboratory