[1] Zhang Shaobo, Pan Yimeng, Liu Qin, et al. Backdoor attacks and defenses targeting multi-domain AI models: A comprehensive review[J]. ACM Computing Surveys, 2025, 57(4): 135
[2] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[J]. arXiv preprint, arXiv:1312.6199, 2013
[3] Gu T, Dolan-Gavitt B, Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain[J]. arXiv preprint, arXiv:1708.06733, 2017
[4] Liu Y, Ma S, Aafer Y, et al. Trojaning attack on neural networks[C] Proc of the 25th Annual Network and Distributed System Security Symp (NDSS 2018). San Diego, CA: Internet Society, 2018
[5] Chen X, Liu C, Li B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. arXiv preprint, arXiv:1712.05526, 2017
[6] Nguyen T A, Tran A. Input-aware dynamic backdoor attack[J]. Advances in Neural Information Processing Systems, 2020, 33: 3454-3464
[7] Nguyen A, Tran A. WaNet: Imperceptible warping-based backdoor attack[J]. arXiv preprint, arXiv:2102.10369, 2021
[8] Zhu M, Wei S, Shen L, et al. Enhancing fine-tuning based backdoor defense with sharpness-aware minimization[C] Proc of the IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2023: 4466-4477
[9] Li Y, Lyu X, Ma X, et al. Reconstructive neuron pruning for backdoor defense[C] Proc of the Int Conf on Machine Learning. New York: PMLR, 2023: 19837-19854
[10] Wu B, Chen H, Zhang M, et al. BackdoorBench: A comprehensive benchmark and analysis of backdoor learning[J]. International Journal of Computer Vision, 2025, 133(8): 5700-5787
[11] Li Y, Lyu X, Koren N, et al. Neural attention distillation: Erasing backdoor triggers from deep neural networks[J]. arXiv preprint, arXiv:2101.05930, 2021
[12] Zheng Jiaxi, Chen Wei, Yin Ping, et al. Research on invisible backdoor attacks based on interpretability[J]. Journal of Information Security Research, 2025, 11(1): 21-27 (in Chinese)
[13] Wang B, Yao Y, Shan S, et al. Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks[C] Proc of the 2019 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2019: 707-723
[14] Liu K, Dolan-Gavitt B, Garg S. Fine-pruning: Defending against backdooring attacks on deep neural networks[C] Proc of the Int Symp on Research in Attacks, Intrusions, and Defenses. Berlin: Springer, 2018: 273-294
[15] Zheng R, Tang R, Li J, et al. Data-free backdoor removal based on channel Lipschitzness[C] Proc of the European Conf on Computer Vision. Berlin: Springer, 2022: 175-191
[16] Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521-3526