Affiliation:
1. Dalian University of Technology, Dalian, China
2. National Institute of Telecommunications (Inatel), Santa Rita do Sapucaí--MG, Brazil; Instituto de Telecomunicações, Portugal; Federal University of Piauí, Teresina--PI, Brazil
Abstract
The development of smart vehicles provides drivers and passengers with a comfortable and safe environment, and various emerging applications promise to enrich users' traveling experiences and daily life. However, executing computation-intensive applications on resource-constrained vehicles remains a significant challenge. In this article, we construct an intelligent offloading system for vehicular edge computing by leveraging deep reinforcement learning. First, both the communication and computation states are modeled by finite Markov chains. The task scheduling and resource allocation strategy is then formulated as a joint optimization problem to maximize users' Quality of Experience (QoE). Due to its complexity, the original problem is further divided into two sub-optimization problems: a two-sided matching scheme is developed to schedule offloading requests, and a deep reinforcement learning approach is developed to allocate network resources. Performance evaluations illustrate the effectiveness and superiority of the constructed system.
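To make the abstract's pipeline concrete, the sketch below shows how a reinforcement-learning agent might pick a resource-allocation action over communication and computation states that evolve as finite Markov chains. It is not the authors' implementation: it substitutes simple tabular Q-learning for the paper's deep reinforcement learning approach, and the state spaces, transition matrices, and QoE-style reward are all assumptions introduced for illustration.

```python
# Illustrative sketch only: tabular Q-learning over discretized channel and
# edge-CPU states (both assumed to evolve as finite Markov chains), standing in
# for the paper's DRL-based network resource allocator. All sizes, transition
# matrices, and the reward shape are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNEL, N_CPU, N_ACTIONS = 4, 3, 5              # assumed state/action space sizes
P_CHANNEL = rng.dirichlet(np.ones(N_CHANNEL), size=N_CHANNEL)  # assumed channel transitions
P_CPU = rng.dirichlet(np.ones(N_CPU), size=N_CPU)              # assumed edge-CPU transitions

Q = np.zeros((N_CHANNEL, N_CPU, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(ch, cpu, act):
    # Toy QoE proxy: better channel and more free CPU reduce latency;
    # larger allocations (act) help but incur a resource cost.
    latency = 1.0 / (1 + ch) + 1.0 / (1 + cpu) - 0.1 * act
    return -latency - 0.05 * act

ch, cpu = 0, 0
for step in range(20000):
    # epsilon-greedy choice among candidate resource allocations
    if rng.random() < eps:
        act = rng.integers(N_ACTIONS)
    else:
        act = int(np.argmax(Q[ch, cpu]))
    r = reward(ch, cpu, act)
    # channel and CPU states evolve as independent finite Markov chains (assumption)
    ch_next = rng.choice(N_CHANNEL, p=P_CHANNEL[ch])
    cpu_next = rng.choice(N_CPU, p=P_CPU[cpu])
    # standard Q-learning update toward the bootstrapped target
    Q[ch, cpu, act] += alpha * (r + gamma * Q[ch_next, cpu_next].max() - Q[ch, cpu, act])
    ch, cpu = ch_next, cpu_next

print("Greedy allocation per (channel, cpu) state:\n", Q.argmax(axis=2))
```

A deep approach, as in the article, would replace the Q table with a neural-network approximator so that much larger (or continuous) state spaces can be handled; the interaction loop and update target stay conceptually the same.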
Funder
State Key Laboratory of Integrated Services Networks, Xidian University
National Natural Science Foundation of China
China Postdoctoral Science Foundation
State Key Laboratory for Novel Software Technology, Nanjing University
Dalian Science and Technology Innovation Fund
National Natural Science Foundation of Chongqing
Brazilian National Council for Research and Development
National Funding from the FCT—Fundação para a Ciência e a Tecnologia
RNP, with resources from MCTIC
Centro de Referência em Radiocomunicações—CRR project of the Instituto Nacional de Telecomunicações (Inatel), Brazil
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Theoretical Computer Science
Cited by
241 articles.