Affiliations:
1. The U.S. Securities and Exchange Commission
2. KX
3. In-Q-Tel
Abstract
We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever the environment's outputs depend on, we argue that, for an agent to achieve good performance on average across many such extended environments, the agent must self-reflect. Thus, weighted-average performance over the space of all suitably well-behaved extended environments could be considered a way of measuring how self-reflective an agent is. We give examples of extended environments and introduce a simple transformation that experimentally seems to improve some standard RL agents' performance in a certain type of extended environment.
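To make the setting concrete, below is a minimal Python sketch of an extended environment in this spirit. The interface (a `start`/`step` environment and an agent exposing `act`), and all names in it, are illustrative assumptions rather than the paper's actual API; the authors' real implementations live at https://github.com/semitrivial/ExtendedEnvironments. The key departure from standard RL is that `step` receives the agent itself and bases the reward on what a simulated copy of the agent *would* do on a hypothetical observation.

```python
import random
from copy import deepcopy

class ToyAgent:
    """A trivial deterministic agent: its action is the parity of the observation."""
    def act(self, obs):
        return obs % 2

class HypotheticalActionEnv:
    """A toy extended environment. Unlike a standard environment, step()
    receives the agent itself and rewards the agent iff its actual action
    differs from what a simulated copy of the agent *would* do on a
    hypothetical observation (here, 0)."""
    def start(self):
        self.obs = 1
        return self.obs

    def step(self, agent, action):
        # Query a deep copy so the hypothetical rollout cannot perturb
        # the real agent's internal state.
        would_do = deepcopy(agent).act(0)
        reward = 1.0 if action != would_do else -1.0
        self.obs = random.randint(0, 1)
        return reward, self.obs

# Usage: run a short episode and track average reward.
agent, env = ToyAgent(), HypotheticalActionEnv()
obs, total = env.start(), 0.0
for _ in range(20):
    action = agent.act(obs)
    reward, obs = env.step(agent, action)
    total += reward
print("average reward:", total / 20)
```

The `deepcopy` is the essential move: it lets the environment probe counterfactual behavior without side effects on the real agent. Because the reward here tracks the agent's hypothetical behavior rather than only the observed interaction history, an agent with no model of itself has no way, in general, to discover what the reward is based on; this is the sense in which good average performance across such environments demands self-reflection.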
Cited by: 3 articles.