Abstract
In 1964, Polyak showed that the Heavy-ball method, the simplest momentum technique, accelerates convergence of strongly-convex problems in the vicinity of the solution. While Nesterov later developed a globally accelerated version, Polyak's original algorithm remains simpler and more widely used in applications such as deep learning. Despite this popularity, the question of whether Heavy-ball is also globally accelerated has not been fully answered, and no convincing counterexample has been provided. This is largely due to the difficulty of finding an effective Lyapunov function: indeed, most proofs of Heavy-ball acceleration in the strongly-convex quadratic setting rely on eigenvalue arguments. Our work adopts a different approach, studying momentum through the lens of quadratic invariants of simple harmonic oscillators. By utilizing the modified Hamiltonian of Störmer-Verlet integrators, we construct a Lyapunov function that demonstrates an $$O(1/k^2)$$ rate for Heavy-ball in the case of convex quadratic problems. Our novel proof technique, though restricted to linear regression, is found empirically to work well also on non-quadratic convex problems, and thus provides insights into the structure of Lyapunov functions to be used in the general convex case. As such, our paper makes a promising first step towards potentially proving the acceleration of Polyak's momentum method, and we hope it inspires further research around this question.
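For context, Polyak's Heavy-ball iteration is $$x_{k+1} = x_k - \eta \nabla f(x_k) + \beta (x_k - x_{k-1})$$. The minimal Python sketch below runs this iteration on a small random convex quadratic $$f(x) = \tfrac{1}{2} x^\top A x$$ and prints $$k^2 f(x_k)$$, which should remain bounded if an $$O(1/k^2)$$ rate holds; the matrix, step size, and momentum coefficient are illustrative choices, not the parameters or Lyapunov construction from the paper.

```python
import numpy as np

# Heavy-ball on a convex quadratic f(x) = 0.5 * x^T A x (minimum value 0).
# A, eta, and beta are illustrative choices, not taken from the paper.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))
A = M.T @ M / 20.0                      # positive semi-definite Hessian

L_smooth = np.linalg.eigvalsh(A).max()  # smoothness constant (largest eigenvalue)
eta = 1.0 / L_smooth                    # step size (illustrative)
beta = 0.9                              # momentum coefficient (illustrative)

def f(x):
    return 0.5 * x @ A @ x

x_prev = x = rng.standard_normal(5)     # start with zero momentum (x_0 = x_{-1})

for k in range(1, 2001):
    # Polyak's Heavy-ball update: gradient step plus momentum term.
    x, x_prev = x - eta * (A @ x) + beta * (x - x_prev), x
    if k % 400 == 0:
        # If an O(1/k^2) rate holds, k^2 * f(x_k) stays bounded.
        print(f"k={k:4d}  f(x_k)={f(x):.3e}  k^2*f(x_k)={k**2 * f(x):.3e}")
```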
Funder
Hector Stiftung
Max Planck Institute for Intelligent Systems
Publisher
Springer Science and Business Media LLC