Affiliation:
1. School of Cyber Science and Engineering, Wuhan University, Wuhan, China
2. College of Mechanical Engineering, Tongji University, Shanghai, China
Abstract
In recent years, the rapid advancement of the Internet of Things (IoT) and the widespread adoption of smart cities have posed new challenges for computing services. Traditional cloud computing models fail to meet the rapid-response requirements of latency-sensitive applications, whereas mobile edge computing (MEC) improves service efficiency and user experience by moving computing tasks to servers located at the network edge. However, designing an effective computation offloading strategy in complex scenarios involving multiple computing tasks, nodes, and services remains a pressing issue. In this paper, a computation offloading approach based on Deep Reinforcement Learning (DRL) is proposed for large-scale heterogeneous computing tasks. First, a Markov Decision Process (MDP) is used to formulate the computation offloading decision and resource allocation problems in large-scale heterogeneous MEC systems. Next, a comprehensive "end-edge-cloud" framework, along with the corresponding time-overhead and resource allocation models, is constructed. Finally, through extensive experiments on real datasets, the proposed approach is demonstrated to outperform existing methods in improving service response speed, reducing latency, balancing server loads, and saving energy.
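To make the MDP formulation concrete, the sketch below shows one plausible way to encode the offloading problem as a state-action-reward loop. It is a minimal illustration, not the paper's exact model: the state layout (current task size plus per-node load), the node speeds, and the latency/energy weights `alpha` and `beta` are all assumed values chosen for the example.

```python
import numpy as np

# Hypothetical sketch of the offloading MDP described in the abstract.
# All sizes, speeds, and cost weights below are illustrative assumptions.

class OffloadingEnv:
    """State:  current task size plus the queued load on each node.
    Action: index of the node the task is offloaded to
            (0 = local device, 1..M = edge servers, M+1 = cloud).
    Reward: negative weighted sum of latency and energy cost."""

    def __init__(self, n_tasks=10, n_edge=4, alpha=0.7, beta=0.3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_tasks, self.n_nodes = n_tasks, n_edge + 2
        # Assumed CPU speeds (cycles/s): device < edge servers < cloud.
        self.speed = np.concatenate(([1.0], np.full(n_edge, 4.0), [10.0]))
        self.alpha, self.beta = alpha, beta  # latency vs. energy weights

    def reset(self):
        self.task_size = self.rng.uniform(1.0, 5.0, self.n_tasks)  # CPU cycles
        self.load = np.zeros(self.n_nodes)  # queued work per node
        self.t = 0
        return self._state()

    def _state(self):
        return np.concatenate(([self.task_size[self.t]], self.load))

    def step(self, action):
        size = self.task_size[self.t]
        # Latency: queuing delay plus execution time on the chosen node.
        latency = (self.load[action] + size) / self.speed[action]
        # Energy: remote execution is cheaper per cycle but adds an
        # assumed fixed transmission cost.
        energy = size * (0.1 if action == 0 else 0.05) \
                 + (0.0 if action == 0 else 0.2)
        self.load[action] += size
        reward = -(self.alpha * latency + self.beta * energy)
        self.t += 1
        done = self.t >= self.n_tasks
        return (None if done else self._state()), reward, done

# Usage: a random policy standing in for the trained DRL agent.
env = OffloadingEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    state, reward, done = env.step(np.random.randint(env.n_nodes))
    total += reward
print(f"episode return: {total:.2f}")
```

In the paper's actual formulation, the state and reward terms would be derived from the end-edge-cloud time-overhead and resource allocation models; this toy environment only fixes the interface a DRL agent would train against.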