Author:
Li Jueyou, Wu Changzhi, Wu Zhiyou, Long Qiang, Wang Xiangyu
Abstract
We consider a distributed optimization problem over a multi-agent network, in which
the sum of several local convex objective functions is minimized subject to global
convex inequality constraints. We first transform the constrained optimization
problem to an unconstrained one, using the exact penalty function method. Our
transformed problem has fewer variables and a simpler structure than those arising
in existing distributed primal–dual subgradient methods for constrained
distributed optimization problems. Using the special structure of this problem, we
then propose a distributed proximal-gradient algorithm over a time-varying
communication network, and establish a convergence rate depending on the number of
iterations, the network topology and the number of agents. Although the transformed
problem is nonsmooth by nature, our method still achieves a convergence rate of ${\mathcal{O}}(1/k)$ after $k$ iterations, faster than the ${\mathcal{O}}(1/\sqrt{k})$ rate of existing distributed subgradient-based methods. Simulation
experiments on a distributed state estimation problem illustrate the excellent
performance of our proposed method.
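The core idea of the abstract can be illustrated with a toy example. Below is a minimal sketch, not the authors' exact algorithm: each agent holds an assumed quadratic local objective $f_i(x) = (x - a_i)^2/2$, a shared constraint $x \le c$ is replaced by the exact penalty $\rho\,\max(0, x - c)$ (whose proximal operator is closed-form), and agents alternate between two mixing matrices to mimic a time-varying, jointly connected topology. All data values, the step size, and the penalty weight are illustrative assumptions.

```python
import numpy as np

# Toy instance of a distributed proximal-gradient iteration with an
# exact-penalty reformulation (illustrative; not the paper's exact method).
a = np.array([1.0, 2.0, 3.0])   # assumed local data: f_i(x) = (x - a_i)^2 / 2
c, rho, alpha = 1.0, 6.0, 0.1   # constraint bound, penalty weight, step size
n = len(a)

# Two doubly stochastic mixing matrices, alternated to mimic a
# time-varying communication network whose union graph is connected.
W_even = np.array([[0.5, 0.5, 0.0],
                   [0.5, 0.5, 0.0],
                   [0.0, 0.0, 1.0]])
W_odd  = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.5, 0.5],
                   [0.0, 0.5, 0.5]])

def prox_penalty(v, t):
    """Proximal operator of t * rho * max(0, x - c)."""
    if v - c > t * rho:
        return v - t * rho   # far above the bound: shift down
    if v > c:
        return c             # inside the kink region: land on the bound
    return v                 # feasible side: unchanged

x = a.copy()                 # each agent starts from its local data
for k in range(3000):
    W = W_even if k % 2 == 0 else W_odd
    mixed = W @ x            # consensus (mixing) step over current topology
    grad = mixed - a         # local gradients of f_i at the mixed point
    x = np.array([prox_penalty(mixed[i] - alpha * grad[i], alpha)
                  for i in range(n)])

print(x)  # all agents agree on the penalized optimum x* = c = 1
```

With $\rho$ large enough, the exact penalty makes the unconstrained minimizer coincide with the constrained one ($x^\ast = c = 1$ here, since the unconstrained mean of the $a_i$ violates $x \le c$), so the agents reach consensus on the constrained solution without any dual variables.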
Publisher
Cambridge University Press (CUP)
Subject
Mathematics (miscellaneous)
Cited by
5 articles.