Abstract
In recent years, significant progress has been made in the field of distributed optimization algorithms. This study focuses on the distributed convex optimization problem over an undirected network, where the goal is to minimize the average of the local objective functions, each known only to a single agent, while every agent communicates necessary information only with its neighbors. Building on a state-of-the-art algorithm, we propose a novel distributed optimization algorithm for the setting in which each agent's objective function is smooth and strongly convex. Faster convergence is attained by employing Nesterov and heavy-ball acceleration simultaneously, making the algorithm applicable to many large-scale distributed tasks. Moreover, the step-sizes and momentum coefficients are designed to be uncoordinated, time-varying, and nonidentical, which allows the algorithm to adapt to a wide range of application scenarios. Under standard assumptions, a rigorous theoretical analysis establishes a linear convergence rate. Finally, numerical experiments on a real dataset demonstrate the efficacy of the proposed algorithm and its superiority over comparable algorithms.
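For concreteness, the consensus optimization problem described above can be written as follows. The accompanying iteration is only a schematic sketch of an accelerated distributed update of the kind the abstract describes, not the paper's exact algorithm; the symbols $\alpha_i^k$, $\beta_i^k$, $\gamma_i^k$, and $w_{ij}$ are introduced here purely for illustration.

\[
\min_{x \in \mathbb{R}^p} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\]
where only agent $i$ knows its smooth, strongly convex local function $f_i$. A generic update combining both momentum terms over a mixing matrix $W = [w_{ij}]$ (with $w_{ij} \neq 0$ only for neighboring agents) might take the form
\[
x_i^{k+1} = \sum_{j=1}^{n} w_{ij}\, y_j^{k} - \alpha_i^{k}\, \nabla f_i\!\big(y_i^{k}\big) + \beta_i^{k}\big(x_i^{k} - x_i^{k-1}\big),
\qquad
y_i^{k+1} = x_i^{k+1} + \gamma_i^{k}\big(x_i^{k+1} - x_i^{k}\big),
\]
where $\alpha_i^k$ are the uncoordinated, time-varying step-sizes, and $\beta_i^k$ (heavy-ball) and $\gamma_i^k$ (Nesterov) are the momentum coefficients, all of which may differ across agents and iterations.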
Funder
National Natural Science Foundation of China
Key Project of Chongqing Science and Technology Bureau
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)