Abstract
We propose a random coordinate descent algorithm for optimizing a non-convex objective function subject to one linear constraint and simple bounds on the variables. Although it is common to update only two random coordinates per iteration of a coordinate descent algorithm, our algorithm allows updating an arbitrary number of coordinates. We provide a proof of convergence of the algorithm. The convergence rate of the algorithm improves when we update more coordinates per iteration. Numerical experiments on large-scale instances of different optimization problems show the benefit of updating many coordinates simultaneously.
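The abstract only summarizes the method, so as a rough illustration of the mechanics here is a minimal sketch in Python, not the paper's actual algorithm: each iteration draws a random block S of coordinates, takes a gradient step on those coordinates, and projects the block back onto the set {z : a_S^T z = a_S^T x_S, l ≤ z ≤ u}, which keeps the linear constraint a^T x = b and the bounds satisfied. The helper names (`project_block`, `random_block_cd`), the fixed step size, and the bisection-based projection are all assumptions made for illustration.

```python
import numpy as np

def project_block(y, a, c, lo, hi, tol=1e-10, max_iter=100):
    """Project y onto {z : a @ z = c, lo <= z <= hi}, assumed nonempty.

    Bisection on the multiplier lam of the hyperplane constraint:
    z(lam) = clip(y - lam * a, lo, hi), and a @ z(lam) is nonincreasing in lam.
    """
    def val(lam):
        return a @ np.clip(y - lam * a, lo, hi) - c

    # Bracket a root of the monotone function val.
    lam_lo, lam_hi = -1.0, 1.0
    while val(lam_lo) < 0:
        lam_lo *= 2.0
    while val(lam_hi) > 0:
        lam_hi *= 2.0
    for _ in range(max_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        if val(lam) > 0:
            lam_lo = lam
        else:
            lam_hi = lam
        if lam_hi - lam_lo < tol:
            break
    return np.clip(y - lam * a, lo, hi)

def random_block_cd(grad_f, x0, a, lo, hi, block_size=2, step=1e-2,
                    n_iter=1000, rng=None):
    """Illustrative random block coordinate descent for
    min f(x)  s.t.  a @ x = b,  lo <= x <= hi.

    x0 must be feasible; each iteration updates `block_size` random
    coordinates with a projected gradient step that leaves a @ x unchanged.
    """
    rng = np.random.default_rng(rng)
    x = x0.astype(float).copy()
    n = x.size
    for _ in range(n_iter):
        S = rng.choice(n, size=block_size, replace=False)
        g = grad_f(x)[S]          # partial gradient on the chosen block
        y = x[S] - step * g       # unconstrained gradient step on the block
        c = a[S] @ x[S]           # block's share of the linear constraint
        x[S] = project_block(y, a[S], c, lo[S], hi[S])
    return x
```

With `a = np.ones(n)` and a feasible starting point on the simplex, for instance, this sketch performs block updates that preserve the coordinate sum, which is the simplest special case of the one-linear-constraint setting described above.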
Publisher
Springer Science and Business Media LLC