Affiliation:
1. College of Computer and Information, Hohai University, Nanjing, China
2. NARI Technology Co., Ltd., Nanjing, China
3. School of Communications and Information Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
4. School of Computer Science and Engineering, Nanyang Technological University, Singapore
5. College of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing, China
Abstract
In the era of sixth generation mobile networks (6G), industrial big data is generated rapidly by the growing number of data‐driven applications in the Industrial Internet of Things (IIoT). Effectively processing such data, for example, through knowledge learning, on resource‐limited IIoT devices becomes a challenge. To this end, we introduce a cloud‐edge‐end collaboration architecture in which computing, communication, and storage resources are flexibly coordinated to alleviate resource constraints. To achieve better performance in hyper‐connected experience, real‐time communication, and sustainable computing, we construct a novel architecture that combines digital twin (DT)‐IIoT with edge networks. In addition, considering the energy consumption and delay issues in distributed learning, we propose a deep reinforcement learning‐based method, deep deterministic policy gradient with double actors and double critics (D4PG), to manage multi‐dimensional resources, that is, CPU cycles, DT models, and communication bandwidths, enhancing the exploration ability of agents and mitigating inaccurate value estimation in continuous action spaces. Furthermore, we introduce a synchronization threshold into the distributed learning framework to avoid the synchronization latency caused by stragglers. Extensive experimental results demonstrate that the proposed architecture can efficiently conduct knowledge learning, and that the intelligent scheme improves system efficiency by managing multi‐dimensional resources.
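As a rough illustration of the straggler‐tolerant synchronization idea mentioned in the abstract, the minimal sketch below shows a parameter server that closes a distributed‐learning round once a configurable fraction of workers has reported, rather than waiting for every device. All names and values here (e.g., sync_threshold, aggregate_round) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def aggregate_round(worker_updates, num_workers, sync_threshold=0.8):
    """Aggregate a distributed-learning round once enough workers report.

    worker_updates : list of (worker_id, gradient ndarray) received so far
    num_workers    : total number of IIoT devices participating in the round
    sync_threshold : fraction of workers the server waits for before
                     averaging, so that stragglers do not stall the round
    """
    required = int(np.ceil(sync_threshold * num_workers))
    if len(worker_updates) < required:
        return None  # threshold not reached yet: keep waiting

    # Average only the updates that arrived in time; late stragglers
    # are simply skipped for this round.
    grads = np.stack([g for _, g in worker_updates])
    return grads.mean(axis=0)
```

In this toy version the server simply averages whatever updates arrived before the threshold was met; the exact threshold value and aggregation rule used in the paper may differ.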
Funder
National Natural Science Foundation of China