[1] McMahan H B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data [C] //Proc of the 20th Int Conf on Artificial Intelligence and Statistics. San Diego, CA: JMLR, 2017: 1273-1282
[2] Zhou Chuanxin, Sun Yi, Wang Degang, et al. Survey of federated learning research [J]. Chinese Journal of Network and Information Security, 2021, 7(5): 77-92 (in Chinese)
[3] Wang Kunqing, Liu Jing, Zhao Yuhang, et al. Survey of security threats in federated learning [J]. Journal of Information Security Research, 2022, 8(3): 223-234 (in Chinese)
[4] Nguyen T D, Nguyen T, Nguyen P L, et al. Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions [J]. Engineering Applications of Artificial Intelligence, 2024, 127(A): 107166-107170
[5] Wang H, Sreenivasan K, Rajput S, et al. Attack of the tails: Yes, you really can backdoor federated learning [C] //Proc of the 34th Int Conf on Neural Information Processing Systems. San Diego, CA: NeurIPS, 2020: 16070-16084
[6] Yoo K Y, Kwak N. Backdoor attacks in federated learning by rare embeddings and gradient ensembling [C] //Proc of the 2022 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2022: 72-88
[7] Zhang J, Chen B, Cheng X, et al. PoisonGAN: Generative poisoning attacks against federated learning in edge computing systems [J]. IEEE Internet of Things Journal, 2021, 8(5): 3310-3322
[8] Xie C, Huang K, Chen P Y, et al. DBA: Distributed backdoor attacks against federated learning [C] //Proc of the Int Conf on Learning Representations. Washington, DC: ICLR, 2020: 1097-1112
[9] Gong X, Chen Y, Huang H, et al. Coordinated backdoor attacks against federated learning with model-dependent triggers [J]. IEEE Network, 2022, 36(1): 84-90
[10] Bagdasaryan E, Veit A, Hua Y, et al. How to backdoor federated learning [C] //Proc of the Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2020: 2938-2948
[11] Baruch M, Baruch G, Goldberg Y. A little is enough: Circumventing defenses for distributed learning [C] //Proc of the 33rd Int Conf on Neural Information Processing Systems. San Diego, CA: NeurIPS, 2019: 8635-8645
[12] Zhou X, Xu M, Wu Y, et al. Deep model poisoning attack on federated learning [J]. Future Internet, 2021, 13(3): 73
[13] Zhang Z, Panda A, Song L, et al. Neurotoxin: Durable backdoors in federated learning [C] //Proc of the 39th Int Conf on Machine Learning. New York: PMLR, 2022: 26429-26446
[14] Fang P, Chen J. On the vulnerability of backdoor defenses for federated learning [C] //Proc of the 37th AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2023: 11800-11808
[15] Zhang Z, Cao X, Jia J, et al. FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients [C] //Proc of the 28th ACM SIGKDD Conf on Knowledge Discovery and Data Mining. New York: ACM, 2022: 2545-2555
[16] Nguyen T D, Rieger P, Chen H, et al. FLAME: Taming backdoors in federated learning [C] //Proc of the 31st USENIX Security Symposium. Berkeley, CA: USENIX Association, 2022: 1415-1432
[17] Rieger P, Nguyen T D, Miettinen M, et al. DeepSight: Mitigating backdoor attacks in federated learning through deep model inspection [C] //Proc of the Symp on Network and Distributed System Security. San Diego, CA: NDSS, 2022: 24-28
[18] Gill W, Anwar A, Gulzar M A. FedDefender: Backdoor attack defense in federated learning [C] //Proc of the 1st Int Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components. New York: ACM, 2023: 6-9
[19] Kumari K, Rieger P, Fereidooni H, et al. BayBFed: Bayesian backdoor defense for federated learning [C] //Proc of the 2023 IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2023: 737-754
[20] Zhang K, Tao G, Xu Q, et al. FLIP: A provable defense framework for backdoor mitigation in federated learning [C] //Proc of the Int Conf on Learning Representations. Washington, DC: ICLR, 2023
[21] Sun J, Li A, DiValentin L, et al. FL-WBC: Enhancing robustness against model poisoning attacks in federated learning from a client perspective [C] //Proc of the 35th Conf on Neural Information Processing Systems. San Diego, CA: NeurIPS, 2021: 12613-12624
[22] Wang N, Xiao Y, Chen Y, et al. FLARE: Defending federated learning against model poisoning attacks via latent space representations [C] //Proc of the 2022 ACM on Asia Conf on Computer and Communications Security. New York: ACM, 2022: 946-958
[23] Andreina S, Marson G A, Möllering H, et al. BaFFLe: Backdoor detection via feedback-based federated learning [C] //Proc of the 41st Int Conf on Distributed Computing Systems. Piscataway, NJ: IEEE, 2021: 852-863
[24] Ozdayi M S, Kantarcioglu M, Gel Y R. Defending against backdoors in federated learning with robust learning rate [C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2021: 9268-9276
[25] Xie C, Chen M, Chen P Y, et al. CRFL: Certifiably robust federated learning against backdoor attacks [C] //Proc of the 38th Int Conf on Machine Learning. New York: PMLR, 2021: 11372-11382
[26] Wu C, Yang X, Zhu S, et al. Mitigating backdoor attacks in federated learning [J]. arXiv preprint, arXiv:2011.01767, 2020
[27] Yu H, Ma C, Liu M, et al. G2uardFL: Safeguarding federated learning against backdoor attacks through attributed client graph clustering [J]. arXiv preprint, arXiv:2306.04984, 2023