Affiliations:
1. School of Automation, Beijing Institute of Technology, Beijing, China
2. College of Computer Science & Mathematics, University of Kufa, Najaf, Kufa, Iraq
3. Zhongyuan University of Technology, Zhengzhou, Henan Province, China
Abstract
Deep learning models require large amounts of data to semantically segment images. A major issue in the field of medical imaging is accumulating adequate data and then applying the specialized skills needed to label those medical images. Collaboration across institutions could alleviate this problem, but sharing medical data in a centralized location is complicated by legal, privacy, technical, and data-ownership constraints, particularly among international institutions. By guaranteeing user privacy and preventing unauthorized access to raw data, Federated Learning plays a significant role in decentralized deep learning applications. Each client carries out its own part of the learning process: it first trains a machine learning model locally on data from its own site, then uploads the resulting local updates (model weights and biases) to a server, which aggregates the client-provided updates into a global learning model. Because deep learning models employ numerous parameters (weights and biases), the repeated transmission between clients and the server raises communication costs and is inefficient from the standpoint of resource use; this cost grows with the number of contributing clients and communication rounds. In this paper, a novel federated learning architecture with weight-sharing optimization and compression, FedWSOcomp, is proposed for cross-institutional collaboration. In FedWSOcomp, compressing the weights exchanged between clients and the server considerably reduces the size of the updates. FedWSOcomp implements top-z sparsification, quantization with clustering, and compression with three separate encoding techniques. FedWSOcomp outperforms state-of-the-art compression techniques, achieving compression rates of up to 1085× while saving up to 99% of bandwidth and 99% of energy for clients during communication.
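For illustration, the sketch below shows how the two compression steps named in the abstract, top-z sparsification followed by quantization with clustering, can be applied to a local weight update before upload. This is not the authors' implementation; the sparsity fraction `z`, the cluster count `k`, and all function names are assumptions chosen for the example.

```python
# Minimal sketch (NOT the paper's code) of top-z sparsification followed by
# quantization with clustering on a client's local weight update.
# The values z=0.01 and k=8 are illustrative assumptions.
import numpy as np

def top_z_sparsify(update: np.ndarray, z: float = 0.01) -> np.ndarray:
    """Keep only the z fraction of entries with the largest magnitude."""
    flat = np.abs(update).ravel()
    keep = max(1, int(z * flat.size))
    threshold = np.partition(flat, -keep)[-keep]  # keep-th largest magnitude
    return update * (np.abs(update) >= threshold)

def cluster_quantize(update: np.ndarray, k: int = 8, iters: int = 10) -> np.ndarray:
    """1-D k-means over the surviving nonzero values: each value is replaced
    by its cluster centroid, so only small indices plus a k-entry codebook
    need to be transmitted."""
    values = update[update != 0]
    if values.size == 0:
        return update
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = values[assign == j]
            if members.size:
                centroids[j] = members.mean()
    quantized = update.copy()
    quantized[update != 0] = centroids[assign]
    return quantized

# Example: compress a synthetic local update before sending it to the server.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
compressed = cluster_quantize(top_z_sparsify(w, z=0.01), k=8)
print("nonzero fraction:", np.count_nonzero(compressed) / w.size)
```

After these two steps, the sparse indices and cluster codes would still be passed through an entropy-style encoder (the abstract mentions three separate encoding techniques) before transmission; that stage is omitted here.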
Funder
National Natural Science Foundation of China