A Cluster-Driven Adaptive Training Approach for Federated Learning

Authors:

Jeong Younghwan, Kim Taeyoon

Abstract

Federated learning (FL) is a promising collaborative learning approach for edge computing that reduces communication costs and addresses the data privacy concerns of traditional cloud-based training. Accordingly, diverse studies have been conducted to deploy FL in industry. However, practical issues such as non-IID data and stragglers must still be resolved before FL can be implemented in real environments. To address these issues, in this paper we propose a cluster-driven adaptive training approach (CATA-Fed) that enhances the performance of FL training in practical environments. CATA-Fed employs adaptive training during local model updates to improve training efficiency, reducing the time and resources wasted on stragglers, and it provides a straggler mitigation scheme that reduces the workload of straggling clients. In addition, CATA-Fed clusters clients by data size and selects training participants within each cluster, reducing the magnitude differences among the local gradients collected for the global model update under statistically heterogeneous (e.g., non-IID) conditions. During client selection, proportional fair scheduling is employed to secure data diversity and balance client load. We conduct extensive experiments on three benchmark datasets (MNIST, Fashion-MNIST, and CIFAR-10), and the results show that CATA-Fed outperforms previous FL schemes (FedAVG, FedProx, and TiFL) in training speed and test accuracy under diverse FL conditions.
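The abstract describes CATA-Fed only at a high level. The sketch below illustrates how its two main ideas might fit together: selecting participants within data-size clusters via proportional fair (PF) scheduling, and reducing stragglers' local workload. All names (Client, pf_score, adaptive_local_epochs) and the exact scoring and epoch rules are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of cluster-driven, PF-scheduled client selection with
# straggler mitigation, based only on the abstract above.
import random
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    data_size: int           # number of local training samples
    speed: float             # relative compute speed in (0, 1]
    times_selected: int = 0  # how often this client has been picked

def cluster_by_data_size(clients, num_clusters):
    """Group clients with similar data sizes so the gradients aggregated in
    one round have comparable magnitudes under non-IID data."""
    ordered = sorted(clients, key=lambda c: c.data_size)
    step = max(1, len(ordered) // num_clusters)
    return [ordered[i:i + step] for i in range(0, len(ordered), step)]

def pf_score(client):
    """Proportional-fair score (assumed form): rarely selected clients score
    higher, balancing load while keeping the sampled data diverse."""
    return client.data_size / (1 + client.times_selected)

def select_participants(cluster, k):
    """Pick the k highest-scoring clients within one cluster."""
    chosen = sorted(cluster, key=pf_score, reverse=True)[:k]
    for c in chosen:
        c.times_selected += 1
    return chosen

def adaptive_local_epochs(client, base_epochs=5):
    """Straggler mitigation sketch: slower clients run proportionally fewer
    local epochs so a round does not stall waiting for them."""
    return max(1, round(base_epochs * client.speed))

# Example: 20 clients, 4 clusters, 2 participants per cluster per round.
clients = [Client(i, random.randint(100, 5000), random.uniform(0.2, 1.0))
           for i in range(20)]
for rnd in range(3):
    picked = [c for cl in cluster_by_data_size(clients, 4)
              for c in select_participants(cl, 2)]
    print(f"round {rnd}:",
          [(c.cid, adaptive_local_epochs(c)) for c in picked])
```

Selecting within size-based clusters keeps per-round gradient magnitudes comparable, while the PF score trades off data volume against selection frequency; the paper's actual rules may differ.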

Publisher

MDPI AG

Subject

Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry

Cited by 6 articles.

1. An Efficient Asynchronous Federated Learning Protocol for Edge Devices;IEEE Internet of Things Journal;2024-09-01

2. Federated Learning Survey: A Multi-Level Taxonomy of Aggregation Techniques, Experimental Insights, and Future Frontiers;ACM Transactions on Intelligent Systems and Technology;2024-07-17

3. Communication Efficiency and Non-Independent and Identically Distributed Data Challenge in Federated Learning: A Systematic Mapping Study;Applied Sciences;2024-03-24

4. Federated Learning for Resource Management in Edge Computing;2023 Eleventh International Conference on Intelligent Computing and Information Systems (ICICIS);2023-11-21

5. Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning?;2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW);2023-05
