1. Awan, S., Luo, B., Li, F.: CONTRA: Defending against poisoning attacks in federated learning. In: European Symposium on Research in Computer Security (2021). https://par.nsf.gov/biblio/10294585
2. Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., Shmatikov, V.: How to backdoor federated learning. In: Chiappa, S., Calandra, R. (eds.) Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 108, pp. 2938–2948 (2020). https://proceedings.mlr.press/v108/bagdasaryan20a.html
3. Biggio, B., Nelson, B., Laskov, P.: Poisoning attacks against support vector machines. In: Proceedings of the 29th International Conference on International Conference on Machine Learning, pp. 1467–1474. ICML'12. Omnipress, Madison (2012)
4. Cao, X., Gong, N.: MPAF: Model poisoning attacks to federated learning based on fake clients. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 3395–3403. IEEE Computer Society, Los Alamitos (2022). https://doi.org/10.1109/CVPRW56347.2022.00383
5. Chen, X., Liu, G.: Adaptive lazily aggregation based on error accumulation. In: 2023 4th International Conference on Electronic Communication and Artificial Intelligence (ICECAI), pp. 74–77 (2023). https://doi.org/10.1109/ICECAI58670.2023.10176452