Affiliation:
1. Karlsruhe Institute of Technology, Germany
2. Munich Research Center, Huawei Technologies, Germany
Abstract
With an increasing number of smart devices, such as Internet of Things (IoT) devices, deployed in the field, offloading the training of neural networks (NNs) to a central server becomes increasingly infeasible. Recent efforts to improve users’ privacy have led to on-device learning emerging as an alternative. However, a model trained only on a single device, using only local data, is unlikely to reach high accuracy. Federated learning (FL) has been introduced as a solution, offering a privacy-preserving trade-off between communication overhead and model accuracy by sharing knowledge between devices without disclosing the devices’ private data. The applicability and benefit of baseline FL are, however, limited in many relevant use cases due to the heterogeneity present in such environments. In this survey, we outline the heterogeneity challenges FL has to overcome to be widely applicable in real-world applications. We focus especially on the aspect of computation heterogeneity among the participating devices and provide a comprehensive overview of recent works on heterogeneity-aware FL. We discuss two groups: works that adapt the NN architecture and works that approach heterogeneity on the system level, covering Federated Averaging, distillation-based, and split learning–based approaches, as well as synchronous and asynchronous aggregation schemes.
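The baseline Federated Averaging scheme mentioned in the abstract aggregates locally trained models by taking a mean of client parameters weighted by each client's local dataset size. A minimal sketch of that aggregation step follows; the function name `fedavg` and the list-of-arrays parameter layout are illustrative assumptions, not taken from the surveyed works:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging aggregation step (illustrative sketch).

    client_weights: one entry per client, each a list of numpy arrays
                    (one array per model layer).
    client_sizes:   number of local training samples per client,
                    used as aggregation weights.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Dataset-size-weighted mean of this layer across all clients.
        layer_avg = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Two clients with a single-layer "model" of two parameters each;
# the second client holds three times as much data.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]
print(fedavg(clients, sizes)[0])  # weighted mean: [2.5 3.5]
```

In a full FL round, the server would broadcast the aggregated parameters back to the clients, which resume local training; heterogeneity-aware variants modify which sub-model each client trains or when its update is aggregated.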
Funder
Deutsches Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research in Germany)
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
Cited by 9 articles.