1. Robbins, H., and Monro, S. (1951). A stochastic approximation method. Ann. Math. Stat.
2. Bottou, L., and Le Cun, Y. (2004). Large-scale online learning. Adv. Neural Inf. Process. Syst.
3. Defazio, A., Bach, F., and Lacoste-Julien, S. (2014, January 8–13). SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada.
4. Johnson, R., and Zhang, T. (2013). Accelerating stochastic gradient descent using predictive variance reduction. Adv. Neural Inf. Process. Syst.
5. Schmidt, M., Le Roux, N., and Bach, F. (2017). Minimizing finite sums with the stochastic average gradient. Math. Program.