Affiliation:
1. School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, China
2. School of Mechanical and Control Engineering, Baicheng Normal University, Baicheng, Jilin, China
Abstract
In cloud computing, task scheduling is the critical process of allocating computing resources efficiently to meet diverse task requirements. To address the unstable response times, heavy computation, and difficult parameter tuning of traditional task-scheduling methods, an enhanced deep Q-learning cloud task scheduling algorithm is proposed. The algorithm builds on deep reinforcement learning and introduces an improved exploration strategy: the objective function is optimized by defining the state space, action space, and reward function, and the agent's exploration capability is strengthened by combining an upper confidence bound (UCB) exploration strategy with Boltzmann action selection. Simulation experiments were conducted in PyCloudSim, comparing the average instruction-to-response-time ratio and the standard deviation of CPU utilization across algorithms. The results show that the proposed algorithm outperforms the random, earliest, and round-robin (RR) algorithms on both metrics, demonstrating greater efficiency and performance in cloud task scheduling.
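To make the exploration mechanism concrete, the following minimal Python sketch illustrates one common way to combine a UCB bonus with Boltzmann (softmax) action selection; it is an illustrative assumption rather than the paper's implementation, and the function name `select_action`, the exploration weight `c`, and the temperature `tau` are hypothetical.

```python
import numpy as np

def select_action(q_values, counts, t, c=2.0, tau=1.0, rng=None):
    """Sample a scheduling action via Boltzmann (softmax) selection over
    Q-values augmented with a UCB exploration bonus.

    q_values : estimated Q-values for each action in the current state
    counts   : number of times each action has been selected so far
    t        : total number of selections made so far
    c, tau   : exploration weight and temperature (hypothetical defaults)
    """
    rng = rng or np.random.default_rng()
    # UCB bonus: rarely tried actions receive a larger exploration bonus.
    bonus = c * np.sqrt(np.log(t + 1) / (np.asarray(counts) + 1e-8))
    scores = np.asarray(q_values) + bonus
    # Boltzmann sampling: softmax over the augmented scores,
    # with the max subtracted for numerical stability.
    prefs = (scores - scores.max()) / tau
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return rng.choice(len(probs), p=probs)

# Example: 4 candidate VMs, one already tried often, one never tried.
q = np.array([0.5, 0.2, 0.4, 0.1])
n = np.array([10, 1, 3, 0])
print(select_action(q, n, t=int(n.sum())))
```

In this formulation the UCB term drives exploration of under-sampled actions while the softmax keeps selection stochastic, which is one plausible reading of how the two strategies described in the abstract complement each other.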