Affiliation:
1. School of Information and Artificial Intelligence, Anhui Business College, Wuhu, Anhui, China
Abstract
Distributed base station deployment, limited server resources, and dynamically changing end users in mobile edge networks make the design of computation offloading schemes extremely challenging. Considering the advantages of deep reinforcement learning (DRL) in handling dynamic, complex problems, this paper designs an optimal computation offloading and resource allocation strategy. First, the authors consider a multi-user mobile edge network scenario consisting of a Macro-cell Base Station (MBS), Small-cell Base Stations (SBS) and multiple terminal devices, and formulate the resulting communication and computation overheads in detail. Combined with the deterministic delay requirements of tasks, the optimization objective is defined as the overall system energy consumption. A learning algorithm based on Deep Deterministic Policy Gradient (DDPG) is then proposed to minimize this energy consumption. Finally, simulation experiments show that the proposed DDPG algorithm effectively optimizes the target value, achieving a total system energy consumption of only 15.6 J, outperforming the compared algorithms; the results also demonstrate the algorithm's strong communication resource allocation capability.
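The abstract names DDPG as the learning algorithm for joint offloading and resource allocation. The sketch below is a minimal, illustrative actor-critic skeleton in PyTorch, not the paper's implementation: the state/action dimensions, hidden-layer widths, learning rates, and the soft-update rate tau are placeholder assumptions, and the environment (task sizes, channel states, energy model) is left abstract.

```python
# Minimal DDPG actor-critic sketch (PyTorch). All dimensions and
# hyperparameters below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.optim as optim


class Actor(nn.Module):
    """Maps an observed edge-network state to a continuous action,
    e.g. offloading ratios and bandwidth/power allocation fractions in [0, 1]."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Sigmoid(),
        )

    def forward(self, state):
        return self.net(state)


class Critic(nn.Module):
    """Estimates Q(s, a); the negative system energy consumption can serve
    as the reward signal driving the energy-minimization objective."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network parameters, as used in standard DDPG."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)


# Example wiring (dimensions are hypothetical placeholders).
state_dim, action_dim = 10, 4
actor, critic = Actor(state_dim, action_dim), Critic(state_dim, action_dim)
actor_tgt, critic_tgt = Actor(state_dim, action_dim), Critic(state_dim, action_dim)
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = optim.Adam(critic.parameters(), lr=1e-3)
```

In a training loop, the critic would be regressed toward the target `r + gamma * Q_tgt(s', actor_tgt(s'))` over minibatches from a replay buffer, the actor updated by ascending `Q(s, actor(s))`, and `soft_update` applied to both target networks each step.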
Publisher
Institution of Engineering and Technology (IET)
Subject
General Engineering, Energy Engineering and Power Technology, Software
Cited by
3 articles.