[1] Li Q, Wen Z, Wu Z, et al. A survey on federated learning systems: Vision, hype and reality for data privacy and protection[J]. IEEE Trans on Knowledge and Data Engineering, 2021, 35(4): 3347-3366
[2] Yang Q, Liu Y, Chen T, et al. Federated machine learning: Concept and applications[J]. ACM Trans on Intelligent Systems and Technology, 2019, 10(2): 1-19
[3] Bhagoji A N, Chakraborty S, Mittal P, et al. Analyzing federated learning through an adversarial lens[C] Proc of Int Conf on Machine Learning. New York: PMLR, 2019: 634-643
[4] Bagdasaryan E, Veit A, Hua Y, et al. How to backdoor federated learning[C] Proc of Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2020: 2938-2948
[5] Fung C, Yoon C J M, Beschastnikh I. Mitigating sybils in federated learning poisoning[J]. arXiv preprint, arXiv:1808.04866, 2018
[6] Biggio B, Nelson B, Laskov P. Poisoning attacks against support vector machines[J]. arXiv preprint, arXiv:1206.6389, 2012
[7] Liu Xiaoqian, Xu Fei, Ma Zhuo, et al. Research on privacy protection techniques in federated learning[J]. Journal of Information Security Research, 2024, 10(3): 194-201 (in Chinese)
[8] Abadi M, Chu A, Goodfellow I, et al. Deep learning with differential privacy[C] Proc of the 2016 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2016: 308-318
[9] Nguyen T D, Rieger P, De Viti R, et al. FLAME: Taming backdoors in federated learning[C] Proc of the 31st USENIX Security Symposium (USENIX Security 22). Berkeley, CA: USENIX Association, 2022: 1415-1432
[10] Yu Shengxing, Chen Zekai, Chen Zhong, et al. DAGUARD: Distributed backdoor attack defense scheme under federated learning[J]. Journal on Communications, 2023, 44(5): 110-122 (in Chinese)
[11] Xiao Di, Yu Zhuyang, Li Min, et al. Secure federated learning scheme based on differential privacy and model clustering[J]. Computer Engineering & Science, 2024, 46(9): 1606-1615 (in Chinese)
[12] Blanchard P, El Mhamdi E M, Guerraoui R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[C] Proc of the Advances in Neural Information Processing Systems 30 (NeurIPS 2017). Red Hook, NY: Curran Associates, 2017: 119-129
[13] Huang S, Li Y, Chen C, et al. Multi-metrics adaptively identifies backdoors in federated learning[C] Proc of the IEEE/CVF Int Conf on Computer Vision (ICCV 2023). Piscataway, NJ: IEEE, 2023: 4652-4662
[14] Huang S, Li Y, Yan X, et al. Scope: On detecting constrained backdoor attacks in federated learning[J]. IEEE Trans on Information Forensics and Security, 2025, 20: 3302-3315
[15] Wang H, Sreenivasan K, Rajput S, et al. Attack of the tails: Yes, you really can backdoor federated learning[C] Proc of the Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Red Hook, NY: Curran Associates, 2020: 16070-16084