Abstract
Intuitively, the level of autonomy of an agent is related to the degree to which the agent’s goals and behaviour are decoupled from immediate control by the environment. Here, we capitalise on a recent information-theoretic formulation of autonomy and introduce an algorithm for calculating autonomy in the limit of the number of time steps going to infinity. We tackle the question of how an agent’s level of autonomy changes during training. In particular, we use the partial information decomposition (PID) framework to monitor the levels of autonomy and environment internalisation of reinforcement-learning (RL) agents. We performed experiments on two environments: a grid world, in which the agent has to collect food, and a repeating-pattern environment, in which the agent has to learn to imitate a sequence of actions by memorising it. PID also allows us to quantify how much the agent relies on its internal memory, versus on its observations, when transitioning to its next internal state. The experiments show that specific PID terms correlate strongly with the obtained reward and with the agent’s behaviour under perturbations of the observations.
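As a rough illustration of the kind of decomposition described above, the sketch below splits the information that the current memory state and the current observation carry about the next memory state into redundant, unique, and synergistic parts. This is a minimal sketch assuming the Williams–Beer I_min redundancy measure and small discrete state spaces; the paper’s exact PID measure and its autonomy and internalisation definitions may differ, and the `pid_two_sources` helper and the XOR toy distribution are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def mutual_info(p_ts):
    """I(T; S) in bits from a joint probability table p_ts[t, s]."""
    pt = p_ts.sum(axis=1, keepdims=True)
    ps = p_ts.sum(axis=0, keepdims=True)
    mask = p_ts > 0
    return float(np.sum(p_ts[mask] * np.log2(p_ts[mask] / (pt * ps)[mask])))

def specific_info(p_ts, t):
    """Williams-Beer specific information I(S; T=t) in bits."""
    pt = p_ts.sum(axis=1)
    ps = p_ts.sum(axis=0)
    acc = 0.0
    for s in range(p_ts.shape[1]):
        if p_ts[t, s] > 0:
            p_s_given_t = p_ts[t, s] / pt[t]
            p_t_given_s = p_ts[t, s] / ps[s]
            acc += p_s_given_t * np.log2(p_t_given_s / pt[t])
    return acc

def pid_two_sources(p_tmo):
    """Decompose I(T; M, O) into redundant, unique, and synergistic parts.

    p_tmo[t, m, o] is a joint table over (next memory T, current memory M,
    observation O), e.g. estimated from rollouts of a trained agent.
    Assumes the Williams-Beer I_min redundancy measure.
    """
    p_tm = p_tmo.sum(axis=2)                    # joint of target and memory
    p_to = p_tmo.sum(axis=1)                    # joint of target and observation
    p_t_mo = p_tmo.reshape(p_tmo.shape[0], -1)  # treat (M, O) as one source
    pt = p_tmo.sum(axis=(1, 2))

    red = sum(pt[t] * min(specific_info(p_tm, t), specific_info(p_to, t))
              for t in range(len(pt)) if pt[t] > 0)
    unq_m = mutual_info(p_tm) - red             # carried only by memory
    unq_o = mutual_info(p_to) - red             # carried only by the observation
    syn = mutual_info(p_t_mo) - red - unq_m - unq_o
    return {"redundant": red, "unique_memory": unq_m,
            "unique_observation": unq_o, "synergistic": syn}

# Toy example: the next memory bit is the XOR of the current memory bit and
# the observation bit, so all of the 1 bit of information is synergistic.
p = np.zeros((2, 2, 2))
for m in range(2):
    for o in range(2):
        p[m ^ o, m, o] = 0.25
print(pid_two_sources(p))
```

In this reading, a large unique-memory term would indicate reliance on internal memory, while a large unique-observation term would indicate reliance on the immediate observation.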
Funder
Estonian Centre of Excellence in IT
Subject
General Physics and Astronomy