Abstract
<div>We present a decentralized advantage actor-critic algorithm that trains learning agents in parallel environments with synchronous gradient descent. This approach decorrelates the agents' experiences, stabilizing learning and eliminating the need for a replay buffer; it requires no knowledge of other agents' internal state during training or execution, and it runs on a single multi-core CPU.</div>
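The core idea in the abstract — several agents stepping their own environment copies in lockstep, with one synchronous gradient update applied to shared actor and critic parameters — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy environment, the linear policy/value functions, and all hyperparameters (`n_workers`, `lr`, the 0.9 discount) are assumptions chosen for brevity.

```python
import math
import random

class ToyEnv:
    """Hypothetical 1-D environment: the agent nudges a point toward x = 0."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.reset()
    def reset(self):
        self.x = self.rng.uniform(-1.0, 1.0)
        return self.x
    def step(self, action):           # action 0 moves left, 1 moves right
        self.x += -0.1 if action == 0 else 0.1
        return self.x, -abs(self.x)   # reward: closer to 0 is better

def policy_probs(theta, s):
    """Softmax policy over two actions with linear logits theta[a] * s."""
    logits = [theta[0] * s, theta[1] * s]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def value(w, s):
    """Linear state-value estimate V(s) = w0 * s + w1."""
    return w[0] * s + w[1]

def train(n_workers=4, steps=200, lr=0.05, gamma=0.9, seed=0):
    rng = random.Random(seed)
    envs = [ToyEnv(seed + i) for i in range(n_workers)]   # parallel env copies
    states = [env.reset() for env in envs]
    theta, w = [0.0, 0.0], [0.0, 0.0]                     # shared parameters
    for _ in range(steps):
        g_theta, g_w = [0.0, 0.0], [0.0, 0.0]
        for i, env in enumerate(envs):                    # lockstep rollouts
            s = states[i]
            probs = policy_probs(theta, s)
            a = 0 if rng.random() < probs[0] else 1
            s2, r = env.step(a)
            # one-step advantage: bootstrapped TD error
            adv = r + gamma * value(w, s2) - value(w, s)
            # actor: accumulate grad log pi(a|s) * advantage (softmax gradient)
            g_theta[a] += (1 - probs[a]) * s * adv
            g_theta[1 - a] += -probs[1 - a] * s * adv
            # critic: move V(s) toward the bootstrapped target
            g_w[0] += adv * s
            g_w[1] += adv
            states[i] = s2
        # single synchronous update from all workers' averaged gradients
        theta = [p + lr * g / n_workers for p, g in zip(theta, g_theta)]
        w = [p + lr * g / n_workers for p, g in zip(w, g_w)]
    return theta, w
```

Because every worker's experience enters the same averaged update, consecutive gradient samples come from decorrelated states across environments rather than from one agent's correlated trajectory, which is what removes the need for a replay buffer.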
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Cited by
1 article.