Author:
Liu Bing, Xu Bowen, He Tong, Yu Wei, Guo Fanghong
Abstract
The increasing number and functional complexity of power electronic devices in more electric aircraft (MEA) power systems have made modelling and computation highly complex, so that real-time energy management is a formidable challenge; moreover, the discrete-continuous action space of the MEA system under consideration also challenges existing deep reinforcement learning (DRL) algorithms. This paper therefore proposes a real-time energy management optimisation strategy based on hybrid deep reinforcement learning (HDRL). An energy management model of the MEA power system is constructed from an analysis of the generator, bus, load and energy storage system (ESS) characteristics, and the problem is formulated as a multi-objective optimisation problem with both integer and continuous variables. The problem is solved by combining a duelling double deep Q network (D3QN) algorithm with a deep deterministic policy gradient (DDPG) algorithm: the D3QN algorithm handles the discrete action space and the DDPG algorithm the continuous action space. The two algorithms are trained alternately and interact with each other to maximise the long-term payoff of the MEA. Simulation results verify the effectiveness of the method under different generator operating conditions. For different time horizons T, the method always obtains smaller objective function values than previous DRL algorithms and, despite a slight loss in solution accuracy, is several orders of magnitude faster than commercial solvers, with a computation time that remains below 0.2 s. In addition, the method has been validated on a hardware-in-the-loop simulation platform.
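The abstract describes a hybrid action-selection scheme in which a D3QN-style network chooses the discrete action and a DDPG-style actor outputs the continuous action. The sketch below illustrates only that idea; the state and action dimensions, network sizes, and the way the two heads are combined into one agent are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of hybrid (discrete + continuous) action selection, assuming
# PyTorch and placeholder dimensions for the MEA state and action spaces.
import torch
import torch.nn as nn

STATE_DIM, N_DISCRETE, CONT_DIM = 8, 4, 2  # assumed, not from the paper


class DuelingQNet(nn.Module):
    """Dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)
        self.adv = nn.Linear(64, N_DISCRETE)

    def forward(self, s):
        h = self.body(s)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=-1, keepdim=True)


class ContinuousActor(nn.Module):
    """DDPG-style deterministic actor with output bounded to [-1, 1]."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, CONT_DIM), nn.Tanh(),
        )

    def forward(self, s):
        return self.net(s)


def select_action(state, qnet, actor, epsilon=0.1):
    """Hybrid action: epsilon-greedy discrete choice plus deterministic continuous part."""
    with torch.no_grad():
        if torch.rand(1).item() < epsilon:
            discrete = torch.randint(N_DISCRETE, (1,)).item()
        else:
            discrete = qnet(state).argmax(dim=-1).item()
        continuous = actor(state).squeeze(0)
    return discrete, continuous


if __name__ == "__main__":
    s = torch.randn(1, STATE_DIM)  # placeholder MEA state (e.g. bus voltages, loads, ESS SoC)
    d, c = select_action(s, DuelingQNet(), ContinuousActor())
    print("discrete action:", d, "continuous action:", c.tolist())
```

In the paper the two networks are trained alternately with separate losses; the sketch only shows how a single hybrid action could be assembled at decision time.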
Subject
Energy (miscellaneous), Energy Engineering and Power Technology, Renewable Energy, Sustainability and the Environment, Electrical and Electronic Engineering, Control and Optimization, Engineering (miscellaneous), Building and Construction
Cited by
1 article.