Affiliation:
1. College of Computer Science and Technology, Henan Institute of Technology, Xinxiang, Henan 453002, China
2. College of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan 453002, China
Abstract
With the development of technologies such as the Internet of Things (IoT) and 5G, the exponential growth of newly generated data has imposed more stringent requirements on ultra-reliable, low-latency communication services. To better meet these requirements, a resource allocation strategy based on deep reinforcement learning in a cloud-edge collaborative computing environment is proposed. First, a collaborative mobile edge computing (MEC) system model, which combines the core cloud center with MEC to improve network interaction capability, is constructed, and the communication and computation models of the system are considered jointly. Then, the goal of minimizing system delay is formulated as a Markov decision process and solved with a deep Q network (DQN) improved by hindsight experience replay (HER), so as to realize resource allocation with minimum system delay. Finally, the proposed method is evaluated on a simulation platform. The results show that when the number of user terminals is 80, the maximum user delay is 1150 ms, which outperforms the comparison strategies and effectively reduces system delay in complex environments.
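To make the HER-improved DQN concrete, below is a minimal sketch (not the authors' code) of a goal-conditioned DQN update with hindsight relabeling, where the "goal" is a target delay and actions correspond to offloading choices. The network architecture, state/goal dimensions, reward shaping, and the target delay value are illustrative assumptions only.

```python
# Minimal sketch of DQN + hindsight experience replay (HER) for a delay-goal setting.
# All dimensions, the reward function, and GOAL are assumed for illustration.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

STATE_DIM, GOAL_DIM, N_ACTIONS = 8, 1, 4   # assumed: task/channel state, delay goal, offloading choices
GOAL = 1.0                                  # assumed target delay (normalized)
gamma = 0.99

class QNet(nn.Module):
    """Q-network over the concatenated (state, goal); one Q-value per offloading action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + GOAL_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1))

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=50_000)

def reward_fn(achieved_delay, goal_delay):
    # Sparse reward: success only if the achieved delay meets the (possibly relabeled) goal.
    return 0.0 if achieved_delay <= goal_delay else -1.0

def store_episode(episode):
    """Store original transitions plus HER-relabeled copies.

    episode: list of (state, action, achieved_delay, next_state) tuples,
    collected while pursuing the original goal delay GOAL.
    """
    final_delay = episode[-1][2]
    for s, a, d, s2 in episode:
        buffer.append((s, a, reward_fn(d, GOAL), np.array([GOAL]), s2))
        # Hindsight: pretend the delay actually achieved was the goal all along.
        buffer.append((s, a, reward_fn(d, final_delay), np.array([final_delay]), s2))

def train_step(batch_size=64):
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    s, a, r, g, s2 = (torch.as_tensor(np.array(x), dtype=torch.float32) for x in zip(*batch))
    a = a.long()
    q = q_net(s, g).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2, g).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

The relabeling in `store_episode` is what turns otherwise wasted "failed" episodes (delay targets missed) into useful training signal, which is the usual motivation for combining HER with a sparse-reward DQN.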
Subject
Computer Networks and Communications, Computer Science Applications
Cited by
2 articles.