Abstract
Stochastic variance reduced gradient (SVRG) is a popular variance reduction technique for accelerating stochastic gradient descent (SGD). We provide a first analysis of the method for solving a class of linear inverse problems through the lens of classical regularization theory. We prove that, for a suitable constant step size schedule, the method achieves an optimal convergence rate in terms of the noise level (under a suitable regularity condition), and that the variance of the SVRG iterate error is smaller than that of SGD. These theoretical findings are corroborated by a set of numerical experiments.
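The paper itself provides no code; the following is a minimal NumPy sketch of plain SVRG applied to a discretized linear inverse problem Ax = y with noisy data, using an illustrative constant step size. The function name, parameter values, and epoch length are assumptions for illustration only, not the step size schedule or stopping rule analysed in the paper.

```python
import numpy as np

def svrg_linear_inverse(A, y, step=1e-3, n_outer=20, n_inner=None, seed=None):
    """Minimal SVRG sketch for min_x (1/2n) ||A x - y||^2.

    Illustrative sketch only: step size and epoch length are placeholder
    choices, not the schedule from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if n_inner is None:
        n_inner = n                      # one pass over the data per outer loop
    x_ref = np.zeros(d)                  # reference (snapshot) point
    for _ in range(n_outer):
        # full gradient of (1/2n)||A x - y||^2 at the snapshot
        full_grad = A.T @ (A @ x_ref - y) / n
        x = x_ref.copy()
        for _ in range(n_inner):
            i = rng.integers(n)
            # per-sample gradients of f_i(x) = 0.5 * (a_i^T x - y_i)^2
            g_x   = A[i] * (A[i] @ x - y[i])
            g_ref = A[i] * (A[i] @ x_ref - y[i])
            # variance-reduced update: stochastic gradient plus correction
            x -= step * (g_x - g_ref + full_grad)
        x_ref = x                        # refresh the snapshot
    return x_ref
```

In the regularization setting the iteration count plays the role of the regularization parameter, so in practice the number of outer loops would be tied to the noise level (e.g. via a discrepancy-type stopping rule); the sketch above omits any such rule.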
Funder
Hong Kong RGC General Research Fund
UK EPSRC
Subject
Applied Mathematics, Computer Science Applications, Mathematical Physics, Signal Processing, Theoretical Computer Science
Cited by
1 article.