1. Geiping, J., Bauermeister, H., Dröge, H., Moeller, M.: Inverting gradients - how easy is it to break privacy in federated learning? In: Advances in Neural Information Processing Systems, vol. 33, pp. 16937–16947 (2020)
2. Li, Z., Zhang, J., Liu, L., Liu, J.: Auditing privacy defenses in federated learning via generative gradient leakage. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10132–10142 (2022)
3. Zhang, C., Li, S., Xia, J., Wang, W., Yan, F., Liu, Y.: BatchCrypt: efficient homomorphic encryption for cross-silo federated learning. In: 2020 USENIX Annual Technical Conference (USENIX ATC 2020), pp. 493–506 (2020)
4. Chen, Y., Wang, B., Zhang, Z.: PDLHR: privacy-preserving deep learning model with homomorphic re-encryption in robot system. IEEE Syst. J. 16(2), 2032–2043 (2021)
5. Hu, R., Gong, Y., Guo, Y.: Federated learning with sparsified model perturbation: improving accuracy under client-level differential privacy. arXiv preprint arXiv:2202.07178 (2022)