[1] Gu T, Dolan-Gavitt B, Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain[J]. arXiv preprint, arXiv:1708.06733, 2017
[2] Chen X, Liu C, Li B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. arXiv preprint, arXiv:1712.05526, 2017
[3] Ribeiro M T, Singh S, Guestrin C. "Why should I trust you?": Explaining the predictions of any classifier[C] Proc of the 22nd ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2016: 1135-1144
[4] Gong Xueluan, Chen Yanjiao, Dong Jianshuo, et al. ATTEQ-NN: Attention-based QoE-aware evasive backdoor attacks[C] Proc of the 29th Annual Network and Distributed System Security Symposium (NDSS 2022). San Diego, CA: Internet Society, 2022: 1-18
[5] Barni M, Kallas K, Tondi B. A new backdoor attack in CNNs by training set corruption without label poisoning[C] Proc of 2019 IEEE Int Conf on Image Processing (ICIP). Piscataway, NJ: IEEE, 2019: 101-105
[6] Zou M, Shi Y, Wang C, et al. PoTrojan: Powerful neural-level trojan designs in deep learning models[J]. arXiv preprint, arXiv:1802.03043, 2018
[7] Bagdasaryan E, Shmatikov V. Blind backdoors in deep learning models[C] Proc of the 30th USENIX Security Symposium (USENIX Security 21). Berkeley, CA: USENIX Association, 2021: 1505-1521
[8] Xue M, He C, Wang J, et al. One-to-N & N-to-one: Two advanced backdoor attacks against deep learning models[J]. IEEE Trans on Dependable and Secure Computing, 2020, 19(3): 1562-1578
[9] Wang B, Yao Y, Shan S, et al. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks[C] Proc of 2019 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2019: 707-723
[10] Liu Yichun, Zhang Guanghua, Su Jingfang. Neural network backdoor detection method based on multi-level metric differences[J]. Journal of Information Security Research, 2023, 9(6): 587-592 (in Chinese)
[11] Chen B, Carvalho W, Baracaldo N, et al. Detecting backdoor attacks on deep neural networks by activation clustering[J]. arXiv preprint, arXiv:1811.03728, 2018
[12] Gao Y, Xu C, Wang D, et al. STRIP: A defence against trojan attacks on deep neural networks[C] Proc of the 35th Annual Computer Security Applications Conference. New York: ACM, 2019: 113-125
[13] Wang Y, Zhao M, Li S, et al. Dispersed pixel perturbation-based imperceptible backdoor trigger for image classifier models[J]. IEEE Trans on Information Forensics and Security, 2022, 17: 3091-3106
[14] Liu Y, Ma S, Aafer Y, et al. Trojaning attack on neural networks[C] Proc of the 25th Annual Network and Distributed System Security Symposium (NDSS 2018). San Diego, CA: Internet Society, 2018: 1-15
[15] Gong X, Chen Y, Wang Q, et al. Defense-resistant backdoor attacks against deep neural networks in outsourced cloud environment[J]. IEEE Journal on Selected Areas in Communications, 2021, 39(8): 2617-2631