Affiliation:
1. College of Computer Science and Technology, Zhejiang Normal University, Jinhua 341000, China
2. College of Engineering, Qatar University, Doha 974, Qatar
Abstract
Federated Learning (FL), as an emerging paradigm in distributed machine learning, has received extensive research attention. However, few works consider the impact of device mobility on the learning efficiency of FL. In fact, the training result suffers when heterogeneous clients migrate or go offline during the global aggregation process. To address this issue, an Optimal Global Aggregation strategy (OGAs) is proposed. OGAs first models the interaction between the clients and the server in FL as a Markov Decision Process (MDP), jointly considering device mobility and data heterogeneity to determine which local participants are conducive to global aggregation. To obtain the optimal client-participation strategy, an improved σ-value iteration method is used to solve the MDP, ensuring that the number of participating clients stays within an optimal interval in each global round. Furthermore, Principal Component Analysis (PCA) is applied to reduce the dimensionality of the original features and thereby handle the complex state space of the MDP. Experimental results demonstrate that, compared with existing aggregation strategies, OGAs achieves faster convergence and higher training accuracy, significantly improving the learning efficiency of FL.
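To give a concrete sense of the MDP component only, the Python sketch below runs standard value iteration over a toy client-participation MDP. The abstract does not specify the paper's state, action, or reward definitions, so the state/action spaces, transition matrix, and rewards here are hypothetical placeholders; the improved σ-value iteration refinement and the PCA-based state compression are not reproduced.

```python
import numpy as np

# Hypothetical toy MDP (placeholders, not the paper's model):
#   states  = coarse "client availability" levels
#   actions = how many clients to admit into the global aggregation round
n_states, n_actions = 4, 3
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # expected reward r(s, a)

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Standard value iteration; returns state values and a greedy policy."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * np.einsum("sap,p->sa", P, V)  # Q(s, a) backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

values, policy = value_iteration(P, R)
print("greedy participation action per availability state:", policy)
```

In the paper's setting, the greedy policy would correspond to choosing, per global round, a number of participating clients within the optimal interval; the PCA step mentioned in the abstract would compress the raw client features before they are used to define the MDP state.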
Funder
Zhejiang Normal University