Affiliation:
1. Institute of Computer Science, Foundation for Research and Technology Hellas, 70013 Heraklion, Greece
2. Department of Computer Science, University of Cyprus, 1678 Nicosia, Cyprus
3. Department of Computer Science, School of Sciences and Engineering, University of Nicosia, 2417 Nicosia, Cyprus
Abstract
Federated learning (FL) is a transformative approach to machine learning (ML) that enables the training of a shared model without transferring private data to a central location. This decentralized training paradigm is particularly well suited to edge computing, where IoT devices and edge nodes often possess limited computational power, network bandwidth, and energy resources. While various techniques have been developed to optimize the FL training process, an important question remains unanswered: how should resources be allocated in the training workflow? To address this question, it is crucial to understand the nature of these resources. In physical environments, allocation is typically performed at the node level, with an entire node dedicated to executing a single workload. In contrast, virtualized environments allow a node to be dynamically partitioned into containerized units that can adapt to changing workloads. Consequently, a new question arises: how can a physical node be partitioned into virtual resources to maximize the efficiency of the FL process? To answer this, we investigate various resource allocation methods that consider factors such as computational and network capabilities, the complexity of the datasets, and the specific characteristics of the FL workflow and ML backend. We explore two scenarios: (i) running FL over a finite number of testbed nodes and (ii) hosting multiple parallel FL workflows on the same set of testbed nodes. Our findings reveal that the default configurations of state-of-the-art cloud orchestrators are suboptimal for orchestrating FL workflows. Additionally, we demonstrate that different libraries and ML models exhibit diverse computational footprints. Building on these insights, we discuss methods to mitigate computational interference and enhance the overall performance of FL pipeline execution.
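To make the partitioning question concrete, the sketch below shows one way a physical node could be split into containerized FL clients with explicit CPU and memory shares, rather than relying on an orchestrator's defaults. It is a minimal illustration using the Docker Python SDK; the image name, client count, and resource values are assumptions chosen for the example, not the configuration evaluated in the paper.

import docker

# Illustrative sketch: partition one node into four FL client containers,
# each pinned to two dedicated cores with a fixed memory cap, to limit
# interference between co-located training workloads. All values are
# hypothetical.
client = docker.from_env()
for i in range(4):
    client.containers.run(
        "fl-client:latest",                  # hypothetical FL client image
        name=f"fl-client-{i}",
        cpuset_cpus=f"{2 * i},{2 * i + 1}",  # pin to cores "0,1", "2,3", ...
        mem_limit="2g",                      # cap memory per client
        detach=True,
    )

Explicit pinning of this kind is one simple way to avoid the computational interference between co-located workloads that the abstract highlights.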
Subject
Computer Networks and Communications
Cited by
11 articles.