Affiliation:
1. College of Command and Control Engineering, Army Engineering University of PLA, Nanjing, China
Abstract
Value factorization is a popular method for cooperative multi-agent deep reinforcement learning. In this method, agents generally have the same abilities and select actions using only individual value functions, which are derived from the total environmental reward. This ignores the impact of heterogeneous agents' individual characteristics on action selection, making training less targeted and effective policies harder to learn. To stimulate individual awareness in heterogeneous agents and to improve their learning efficiency and stability, we propose PCQMIX, a novel value factorization method based on personality characteristics, which assigns a personality characteristic to each agent and uses it as an intrinsic reward during training. As a result, PCQMIX can generate heterogeneous agents whose personality characteristics suit specific scenarios. Experiments show that PCQMIX generates agents with stable personality characteristics and outperforms all baselines in multiple scenarios of the StarCraft II micromanagement task.
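The idea of using personality characteristics as intrinsic rewards can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the personality vectors, action features, and the `shaped_reward` helper are all hypothetical, assuming each agent's per-step reward combines the shared team reward with a bonus for actions that align with that agent's personality.

```python
import numpy as np

def shaped_reward(team_reward, personality, action_features, beta=0.1):
    """Hypothetical sketch: shared team reward plus a personality-alignment bonus.

    `personality` and `action_features` are assumed feature vectors; the
    bonus is their cosine similarity scaled by `beta`, so an agent is
    nudged toward actions that match its assigned personality.
    """
    alignment = float(
        np.dot(personality, action_features)
        / (np.linalg.norm(personality) * np.linalg.norm(action_features))
    )
    return team_reward + beta * alignment

# Example: an "aggressive" agent receives a larger shaped reward for an
# attack-like action than for a retreat-like one (features are assumed).
aggressive = np.array([1.0, 0.0])  # personality axes: (aggression, caution)
attack = np.array([1.0, 0.0])
retreat = np.array([0.0, 1.0])
r_attack = shaped_reward(1.0, aggressive, attack)
r_retreat = shaped_reward(1.0, aggressive, retreat)
```

Under this sketch, heterogeneity arises because agents with different personality vectors receive different intrinsic bonuses for the same joint reward, giving each agent a targeted training signal.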
Publisher
Oxford University Press (OUP)