Abstract
The optimization of cognitive and learning mechanisms can reveal complicated behavioral phenomena. In this study, we focused on reinforcement learning that uses different learning rates for positive and negative reward prediction errors. We attempted to relate the evolved learning bias to complex features of risk preference, such as domain-specific behavioral manifestations and the relatively stable domain-general factor underlying behavior. Simulations of the evolution of the two learning rates under diverse risky environments showed that the positive learning rate evolved, on average, to be higher than the negative one when agents experienced both tasks in which risk aversion was more rewarding and tasks in which risk seeking was more rewarding. This evolution enabled agents to flexibly choose more rewarding behaviors depending on the task type. The evolved agents also demonstrated behavioral patterns described by prospect theory. Our simulations captured two aspects of the evolution of risk preference: the domain-specific aspect, i.e., behavior acquired through learning in a specific context; and the implicit domain-general aspect, i.e., the learning rates shaped through evolution to support adaptive behavior across a wide range of environments. These results imply that our framework of learning under innate constraints may be useful in understanding complicated behavioral phenomena.
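The learning mechanism referred to in the abstract amounts to a value update whose learning rate depends on the sign of the reward prediction error. The following is a minimal Python sketch of that idea, not the authors' simulation: the two-option task, the softmax choice rule, and all parameter values are illustrative assumptions.

```python
import math
import random

def update(q, reward, alpha_pos, alpha_neg):
    """Update a value estimate with sign-dependent learning rates.

    A positive reward prediction error is scaled by alpha_pos,
    a negative one by alpha_neg (the asymmetry described in the abstract).
    """
    delta = reward - q                      # reward prediction error
    alpha = alpha_pos if delta >= 0 else alpha_neg
    return q + alpha * delta

def run_task(alpha_pos, alpha_neg, risky_p, n_trials=500, beta=5.0):
    """Softmax choice between a safe option (reward 0.5) and a risky option
    (reward 1 with probability risky_p, else 0). Returns the fraction of
    risky choices, i.e. the risk preference learned in this hypothetical task."""
    q_safe, q_risky = 0.0, 0.0
    risky_choices = 0
    for _ in range(n_trials):
        # two-option softmax action selection
        p_risky = 1.0 / (1.0 + math.exp(-beta * (q_risky - q_safe)))
        if random.random() < p_risky:
            risky_choices += 1
            reward = 1.0 if random.random() < risky_p else 0.0
            q_risky = update(q_risky, reward, alpha_pos, alpha_neg)
        else:
            q_safe = update(q_safe, 0.5, alpha_pos, alpha_neg)
    return risky_choices / n_trials

# Illustrative run: a positivity-biased learner (alpha_pos > alpha_neg) in a task
# where risk seeking pays off (expected risky reward 0.6 > safe 0.5) and in one
# where risk aversion pays off (0.4 < 0.5). All values are assumptions.
print(run_task(alpha_pos=0.4, alpha_neg=0.1, risky_p=0.6))
print(run_task(alpha_pos=0.4, alpha_neg=0.1, risky_p=0.4))
```

In this toy setup, the same sign-dependent learning rates can yield different risk preferences across the two task types, which is the kind of domain-specific behavior built on a domain-general bias that the abstract describes.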
Funder
Japan Society for the Promotion of Science
Publisher
Public Library of Science (PLoS)