Abstract
Natural disasters cause enormous loss of life and property. Unmanned aerial vehicles (UAVs) offer high mobility, high flexibility, and rapid deployment, making them important equipment in post-disaster rescue. However, UAVs typically have limited battery capacity and computing power and are therefore ill-suited to performing compute-intensive tasks during rescue operations. Since parking resources are widespread in cities, this work investigates having multiple parked vehicles cooperatively execute applications offloaded from UAVs during post-disaster rescue, so as to ensure the quality of experience (QoE) of the UAVs. To execute uploaded tasks effectively, the surviving parked vehicles within a UAV's monitoring range are organized into a cluster whenever possible. The task execution cost is then analyzed, and a deep reinforcement learning (DRL)-based offloading policy is constructed that interacts with the environment intelligently to achieve the optimization goals. Simulation experiments show that the proposed offloading scheme achieves a higher task completion rate and a lower task execution cost than other baseline schemes.
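The abstract's core idea can be illustrated with a minimal sketch: an agent learns, per task, whether to execute locally on the UAV or offload to the parked-vehicle cluster so as to minimize execution cost. This is only a toy illustration under stated assumptions, not the paper's method: tabular Q-learning stands in for deep RL, and the state space, cost model, and all parameters (`uav_cpu`, `cluster_cpu`, `uplink_rate`, the fixed offloading overhead) are hypothetical.

```python
import random

# Actions: execute on the UAV itself, or offload to the parked-vehicle cluster.
ACTIONS = ["local", "offload"]

def execution_cost(task_size, action, uav_cpu=1.0, cluster_cpu=8.0,
                   uplink_rate=2.0, offload_overhead=1.0):
    """Illustrative cost model (assumed, not from the paper):
    computation delay, plus transfer delay and a fixed setup overhead
    when offloading. Units are arbitrary."""
    if action == "local":
        return task_size / uav_cpu
    return offload_overhead + task_size / uplink_rate + task_size / cluster_cpu

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular Q-learning over a one-step decision: the state is the
    (discretized) task size, the reward is the negative execution cost."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value (negative cost)
    for _ in range(episodes):
        state = rng.choice([1, 4, 8])  # discretized task sizes
        if rng.random() < epsilon:     # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        reward = -execution_cost(state, action)
        key = (state, action)
        q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))
    return q

def policy(q, task_size):
    """Greedy offloading decision for a given task size."""
    return max(ACTIONS, key=lambda a: q.get((task_size, a), 0.0))
```

Under this toy cost model, the learned policy keeps small tasks on the UAV (the offloading overhead dominates) and offloads large ones to the cluster, mirroring the trade-off the abstract describes.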
Funder
the Natural Science Foundation of China
Subject
Fluid Flow and Transfer Processes,Computer Science Applications,Process Chemistry and Technology,General Engineering,Instrumentation,General Materials Science
Cited by
7 articles.