1. Nesterov, Yu.E., Method of minimizing convex functions with convergence rate $O(1/k^2)$, Dokl. Akad. Nauk SSSR, 1983, vol. 269, no. 3, pp. 543–547.
2. Lan, G., First-Order and Stochastic Optimization Methods for Machine Learning, Cham: Springer, 2020.
3. Gasnikov, A.V., Sovremennye chislennye metody optimizatsii. Metod
universal’nogo gradientnogo spuska (Modern Numerical Optimization Methods.
Universal Gradient Descent Method), Moscow: MTsNMO, 2020.
4. Alkousa, M.S., Dvinskikh, D.M., Stonyakin, F.S., Gasnikov, A.V., and
Kovalev, D., Accelerated methods for saddle point problems, Comput.
Math. Math. Phys., 2020, vol. 60, no. 11, pp. 1787–1809.
5. Gladin, E., Kuruzov, I., Stonyakin, F., Pasechnyuk, D., Alkousa, M., and Gasnikov, A., Solving strongly convex-concave composite saddle point problems with a small dimension of one of the variables, 2020. https://arxiv.org/pdf/2010.02280.pdf