Federated Learning with Pareto Optimality for Resource Efficiency and Fast Model Convergence in Mobile Environments
Author:
Jung June-Pyo 1, Ko Young-Bae 1, Lim Sung-Hwa 2
Affiliation:
1. Department of AI Convergence Network, Ajou University, 206, World Cup-ro, Suwon-si 16499, Republic of Korea
2. Department of Multimedia, Namseoul University, 91, Daehak-ro, Cheonan-si 31020, Republic of Korea
Abstract
Federated learning (FL) is an emerging distributed learning technique through which models can be trained on data collected by user devices in resource-constrained settings while protecting user privacy. However, FL has three main limitations. First, the parameter server (PS), which aggregates the local models trained on local user data, is typically far from users; this large distance can burden the path links between the PS and local nodes, increasing network and computing resource consumption. Second, user device resources are limited, but this constraint is not considered during local model training and model parameter transmission. Third, the PS-side links tend to become highly loaded as the number of participating clients increases, and these links become congested owing to the large size of the model parameters. In this study, we propose a resource-efficient FL scheme. We apply the concept of Pareto optimality with biased client selection to limit client participation, thereby ensuring efficient resource consumption and rapid model convergence. In addition, we propose a hierarchical structure with location-based clustering for device-to-device (D2D) communication using k-means clustering. Simulation results show that with a participation rate (prate) of 0.75, the proposed scheme reduces transmitted and received network traffic by 75.89% and 78.77%, respectively, compared with FedAvg. It also achieves faster model convergence than other FL mechanisms such as FedAvg and D2D-FedAvg.
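The two mechanisms named in the abstract can be illustrated with a minimal sketch. The function names (`kmeans_locations`, `select_clients`) and the use of per-client loss as the selection bias are illustrative assumptions, not details taken from the paper; the sketch only shows the general shape of k-means location clustering and biased selection of a `prate` fraction of clients.

```python
import random

def kmeans_locations(points, k, iters=20, seed=0):
    """Minimal k-means over (x, y) client locations for D2D cluster formation.

    Returns (centers, clusters); clusters[i] holds the points nearest centers[i].
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each client joins its nearest cluster center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: recompute each center as its cluster's centroid.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

def select_clients(local_losses, prate):
    """Biased client selection: keep only the prate fraction of clients,
    preferring those with the highest local loss (an assumed bias criterion)."""
    n = max(1, int(len(local_losses) * prate))
    ranked = sorted(local_losses, key=local_losses.get, reverse=True)
    return ranked[:n]
```

For example, with four clients and `prate = 0.75`, `select_clients` admits only the three highest-loss clients to a round, which is the mechanism by which the scheme limits participation and thus PS-side traffic.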
Funder
National Research Foundation of Korea
Cited by: 2 articles.