1. Li, B., Cen, S., Chen, Y., Chi, Y.: Communication-efficient distributed optimization in networks with gradient tracking and variance reduction. In: International Conference on Artificial Intelligence and Statistics, pp. 1662–1672. PMLR (2020)
2. Ye, H., Zhou, Z., Luo, L., Zhang, T.: Decentralized accelerated proximal gradient descent. Adv. Neural Inf. Process. Syst. 33, 18308 (2020)
3. Yang, T., Lin, Q.: RSG: beating subgradient method without smoothness and strong convexity. J. Mach. Learn. Res. 19(1), 236–268 (2018)
4. Liu, M., Yang, T.: Adaptive accelerated gradient converging method under Hölderian error bound condition. In: Advances in Neural Information Processing Systems, pp. 3104–3114 (2017)
5. Xu, Y., Lin, Q., Yang, T.: Adaptive SVRG methods under error bound conditions with unknown growth parameter. In: Advances in Neural Information Processing Systems, pp. 3277–3287 (2017)