Abstract
Federated learning (FL) enables participants to collaboratively train machine learning and deep learning models while safeguarding data privacy. However, the FL paradigm still has drawbacks that affect its trustworthiness, since malicious participants can launch adversarial attacks against the training process. Previous research has examined the robustness of horizontal FL scenarios under various attacks, but there is a lack of work evaluating the robustness of decentralized vertical FL and comparing it with horizontal FL architectures affected by adversarial attacks. Therefore, this study proposes three decentralized FL architectures: HoriChain, VertiChain, and VertiComb. These architectures use different neural networks and training protocols suited to horizontal and vertical scenarios. A decentralized, privacy-preserving, federated use case with non-IID data for classifying handwritten digits is then deployed to assess the performance of the three architectures. Finally, a series of experiments computes and compares the robustness of the proposed architectures when they are subjected to different data poisoning methods, including image-watermark and gradient poisoning adversarial attacks. The experiments demonstrate that while specific configurations of both attacks can undermine the classification performance of the architectures, HoriChain is the most robust.
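To illustrate the kind of image-watermark data poisoning evaluated in the study, the following is a minimal sketch (not the paper's implementation): a fraction of a client's training images is stamped with a small white-square watermark and relabeled to an attacker-chosen class before local training. The function name, watermark size, poison fraction, and target label are assumptions made purely for illustration.

```python
# Hedged sketch of an image-watermark data poisoning attack on MNIST-like data.
# All parameters and names here are illustrative assumptions, not the paper's code.
import numpy as np

def poison_with_watermark(images, labels, poison_fraction=0.1,
                          watermark_size=4, target_label=0, seed=0):
    """Return copies of `images`/`labels` where a random subset is watermarked
    and relabeled. `images` is expected as an array of shape (N, H, W) with
    pixel values in [0, 1], as for normalized handwritten-digit images."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a white square in the bottom-right corner of each selected image.
    images[idx, -watermark_size:, -watermark_size:] = 1.0
    # Relabel the poisoned samples so the trained model associates the
    # watermark pattern with the attacker's chosen target class.
    labels[idx] = target_label
    return images, labels

# Example usage with random stand-in data shaped like MNIST digits:
if __name__ == "__main__":
    x = np.random.rand(1000, 28, 28)
    y = np.random.randint(0, 10, size=1000)
    x_poisoned, y_poisoned = poison_with_watermark(x, y, poison_fraction=0.2)
```

A malicious participant would train its local model on the poisoned copies, and the resulting updates would then propagate the attack through the federated aggregation step.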
Publisher
Springer Science and Business Media LLC