Affiliation:
1. Department of Electrical and Electronics Engineering, Ege University, Izmir, Turkey
Abstract
With the expansion of the communication and perception capabilities of mobile devices in recent years, the number of complex, computationally demanding applications has also increased, rendering traditional methods of traffic management and resource allocation insufficient. Mobile edge computing (MEC) has recently emerged as a viable solution to these problems: it provides additional computing capacity at the edge of the network, alleviating the resource limitations of mobile devices while improving performance for latency-critical applications. In this work, we address the problem of reducing service delay by selecting the optimal path in a MEC network composed of multiple MEC servers with different capabilities, applying network load balancing when multiple requests must be handled simultaneously, and performing routing selection with a deep Q-network (DQN) algorithm. A novel traffic control and resource allocation method based on deep Q-learning (DQL) is proposed to reduce end-to-end delay in both the cellular network and the mobile edge network. Realistic traffic scenarios with various types of user requests are considered, and a DQL-based resource allocation scheme that adaptively assigns computing and network resources is proposed. The algorithm optimizes traffic distribution among servers, reducing total service time and balancing the use of available resources under varying environmental conditions.
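The abstract does not disclose implementation details, so the following is only a minimal sketch of how a DQN agent of the kind described might be structured: the agent observes the current server loads and request characteristics and chooses which MEC server/path should handle a request, with the reward taken as the negative of the observed service delay. PyTorch, the class names, state features, and hyperparameters below are assumptions for illustration, not the authors' code.

# Minimal DQN sketch (illustrative, not the paper's implementation).
# Assumed state: server load levels plus request features; assumed action:
# index of the MEC server / path chosen to serve the request.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per candidate MEC server / route."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, state_dim, num_actions, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, num_actions)
        self.target = QNetwork(state_dim, num_actions)
        self.target.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)  # experience replay memory
        self.gamma, self.eps, self.num_actions = gamma, eps, num_actions

    def act(self, state):
        # Epsilon-greedy choice of the serving MEC server / path.
        if random.random() < self.eps:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
            return int(q.argmax())

    def remember(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target.load_state_dict(self.q.state_dict())

    def train_step(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = (torch.as_tensor(x, dtype=torch.float32)
                          for x in zip(*batch))
        a = a.long()
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target(s2).max(1).values * (1 - d)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


# Hypothetical usage (environment API assumed):
#   agent = DQNAgent(state_dim=8, num_actions=4)   # e.g. 4 candidate MEC servers
#   action = agent.act(state)
#   reward = -observed_service_delay                # mirrors the delay-minimization objective

In this sketch the reward is simply the negative of the observed end-to-end delay, mirroring the paper's stated objective of reducing total service time; how the authors actually encode the state and shape the reward is not specified in the abstract.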
Cited by: 3 articles.