Gradient-free Federated Learning Methods with l1 and l2-randomization for Non-smooth Convex Stochastic Optimization Problems

Authors:

Alashqar B. A. (1), Gasnikov A. V. (2, 3, 4), Dvinskikh D. M. (5), Lobanov A. V. (1, 6, 7)

Affiliation:

1. Moscow Institute of Physics and Technology, Dolgoprudny, Russia

2. Moscow Institute of Physics and Technology

3. Institute for Information Transmission Problems of the RAS (Kharkevich Institute)

4. Caucasian Mathematical Center of the Adyghe State University

5. National Research University Higher School of Economics, Moscow, Russia

6. ISP RAS Research Center for Trusted Artificial Intelligence, Moscow, Russia

7. Moscow Aviation Institute, Moscow, Russia

Abstract

This paper studies non-smooth convex stochastic optimization problems. A smoothing technique is used that replaces the function value at a given point with the value of the function averaged over a ball (in the l1-norm or l2-norm) of small radius centered at that point; in this way the original problem is reduced to a smooth one, whose gradient Lipschitz constant is inversely proportional to the radius of the ball. An essential property of this smoothing is that an unbiased estimate of the gradient of the smoothed function can be computed using only realizations of the original function. The resulting smooth stochastic optimization problem is proposed to be solved in a distributed federated learning architecture: the problem is solved in parallel, with nodes performing local steps (e.g., of stochastic gradient descent), then communicating all-to-all, after which the procedure repeats. The goal of the article is to construct, building on recent advances in gradient-free non-smooth optimization and in federated learning, gradient-free methods for solving non-smooth stochastic optimization problems in the federated learning architecture.
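
To illustrate the approach described in the abstract, below is a minimal Python (NumPy) sketch of two-point l2-randomization combined with local steps and all-to-all averaging. The function names (l2_two_point_grad, federated_zeroth_order_sgd), step sizes, and round counts are illustrative assumptions, not the authors' algorithm or its tuned parameters; the sketch only shows the general scheme: an unbiased gradient estimate of the l2-smoothed function f_gamma(x) = E[f(x + gamma*u)] (u uniform on the unit l2-ball) built from two function evaluations, used inside a gradient-free local SGD loop with periodic averaging of the nodes' iterates.

    import numpy as np

    def l2_two_point_grad(f, x, gamma, rng):
        # Draw a direction e uniformly from the unit l2-sphere.
        e = rng.standard_normal(x.size)
        e /= np.linalg.norm(e)
        # Two-point estimate: d/(2*gamma) * (f(x + gamma*e) - f(x - gamma*e)) * e.
        # This is an unbiased estimate of the gradient of the smoothed function
        # f_gamma(x) = E_u[ f(x + gamma*u) ], u uniform on the unit l2-ball.
        return x.size * (f(x + gamma * e) - f(x - gamma * e)) / (2.0 * gamma) * e

    def federated_zeroth_order_sgd(node_objectives, x0, gamma=1e-3, lr=1e-2,
                                   rounds=100, local_steps=10, seed=0):
        # Gradient-free local SGD: each node makes `local_steps` zeroth-order
        # steps on its own objective, then all nodes average their iterates
        # (the all-to-all communication round), and the process repeats.
        rng = np.random.default_rng(seed)
        iterates = [np.array(x0, dtype=float) for _ in node_objectives]
        for _ in range(rounds):
            for m, f in enumerate(node_objectives):
                for _ in range(local_steps):
                    g = l2_two_point_grad(f, iterates[m], gamma, rng)
                    iterates[m] = iterates[m] - lr * g
            x_avg = np.mean(iterates, axis=0)
            iterates = [x_avg.copy() for _ in node_objectives]
        return x_avg

    # Hypothetical usage: two nodes with non-smooth (l1-type) local objectives.
    objectives = [lambda x: np.abs(x - 1.0).sum(), lambda x: np.abs(x + 1.0).sum()]
    x_out = federated_zeroth_order_sgd(objectives, x0=np.zeros(5))

The l1-randomization variant mentioned in the title differs only in how the random direction is drawn and in the resulting scaling of the estimator; the overall scheme (two function evaluations per step, local steps, all-to-all averaging) is the same.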

Publisher

The Russian Academy of Sciences


Cited by 1 article:

1. Randomized Gradient-Free Methods in Convex Optimization. Encyclopedia of Optimization, 2023-09-08.
