Affiliation:
1. University of Wollongong
2. Japan Advanced Institute of Science and Technology
3. Nagoya Institute of Technology
Abstract
Due to its exceptional learning ability, multi-agent deep reinforcement learning (MADRL) has garnered widespread research interest. However, since the learning is data-driven and involves sampling from millions of steps, training a large number of agents is inherently challenging and inefficient. Inspired by the human learning process, we aim to transfer knowledge from humans so that agents need not start from scratch. Given the growing emphasis on the Human-on-the-Loop concept, this study addresses the challenges of large-population learning by incorporating suboptimal human knowledge into cooperative multi-agent environments. To leverage human experience, we integrate human knowledge into the training process of MADRL, representing it in natural language rather than as specific state-action pairs. Compared with previous works, we further consider the attributes of the transferred knowledge to assess its impact on algorithm scalability. Additionally, we examine several features of knowledge mapping to effectively convert human knowledge into the action space in which agent learning occurs. To accommodate the disparity in knowledge construction between humans and agents, our approach allows agents to decide freely in which portions of the state space to leverage human knowledge. On the challenging domains of the StarCraft Multi-Agent Challenge, our method successfully alleviates the scalability issue in MADRL. Furthermore, we find that although individual-type knowledge significantly accelerates the training process, cooperative-type knowledge is more desirable for addressing a large agent population. We hope this study provides valuable insights into applying and mapping human knowledge, ultimately enhancing the interpretability of agent behavior.
Publisher
Research Square Platform LLC