[1] Kairouz P, McMahan H B, Avent B, et al. Advances and open problems in federated learning[J]. Foundations and Trends in Machine Learning, 2021, 14(1/2): 1-210
[2] Bagdasaryan E, Veit A, Hua Y, et al. How to backdoor federated learning[C] //Proc of the 2020 Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2020: 2938-2948
[3] Chen Xuebin, Qu Changsheng. Survey of backdoor attacks and defenses for federated learning[J]. Journal of Computer Applications, 2024, 44(11): 3459-3469 (in Chinese)
[4] Liu T, Zhang Y, Feng Z, et al. Beyond traditional threats: A persistent backdoor attack on federated learning[C] //Proc of the AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2024: 21359-21367
[5] Umer M, Dawson G, Polikar R. Targeted forgetting and false memory formation in continual learners through adversarial backdoor attacks[C] //Proc of the 2020 Int Joint Conf on Neural Networks (IJCNN). Piscataway, NJ: IEEE, 2020: 1-8
[6] Liu B, Lyu N, Guo Y, et al. Recent advances on federated learning: A systematic survey[J]. Neurocomputing, 2024, 597: 128019
[7] McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[C] //Proc of Artificial Intelligence and Statistics. New York: PMLR, 2017: 1273-1282
[8] Liu Z, Guo J, Yang W, et al. Privacy-preserving aggregation in federated learning: A survey[J]. arXiv preprint, arXiv:2203.17005, 2022
[9] Liu Y, Ma S, Aafer Y, et al. Trojaning attack on neural networks[C] //Proc of the 25th Annual Network and Distributed System Security Symposium (NDSS 2018). San Diego: Internet Society, 2018: 1-15
[10] Gong X, Chen Y, Huang H, et al. Coordinated backdoor attacks against federated learning with model-dependent triggers[J]. IEEE Network, 2022, 36(1): 84-90
[11] Zhou Jingxian, Han Wei, Zhang Dedong, et al. A federated learning method against label-flipping attacks[J]. Journal of Information Security Research, 2025, 11(3): 205-213 (in Chinese)
[12] Nguyen T D, Nguyen T, Le Nguyen P, et al. Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions[J]. Engineering Applications of Artificial Intelligence, 2024, 127: 107166
[13] Shejwalkar V, Houmansadr A, Kairouz P, et al.
Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning[C] //Proc of the IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2022: 1354-1371
[14] Xie C, Huang K, Chen P Y, et al. DBA: Distributed backdoor attacks against federated learning[C] //Proc of the 2020 Int Conf on Learning Representations. Washington: ICLR, 2020: 1-19
[15] Qu Changsheng, Chen Xuebin, Ren Zhiqiang, et al. Backdoor attack on federated learning based on discrete cosine transform[J/OL]. Journal of Zhengzhou University (Natural Science Edition), 2024 [2025-05-07]. https://doi.org/10.13705/j.issn.1671-6841.2024121 (in Chinese)
[16] Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint, arXiv:1706.06083, 2017
[17] Loshchilov I, Hutter F. SGDR: Stochastic gradient descent with warm restarts[J]. arXiv preprint, arXiv:1608.03983, 2016
[18] Cao X, Jia J, Gong N Z. Provably secure federated learning against malicious clients[J]. Proceedings of the AAAI Conf on Artificial Intelligence, 2021, 35(8): 6885-6893
[19] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C] //Proc of the IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 770-778
[20] LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324
[21] Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms[J]. arXiv preprint, arXiv:1708.07747, 2017
[22] Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images[R]. Toronto: University of Toronto, 2009
[23] Chen X, Liu C, Li B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. arXiv preprint, arXiv:1712.05526, 2017