[1] Khan A, Sohail A, Zahoora U, et al. A survey of the recent architectures of deep convolutional neural networks[J]. Artificial Intelligence Review, 2020, 53(4): 5455-5516
[2] Tao Xiaoyu, Hong Xiaopeng, Chang Xinyuan, et al. Few-shot class-incremental learning[C] //Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 12183-12192
[3] Jiang Wenbo, Zhang Tianwei, Qiu Han, et al. Incremental learning, incremental backdoor threats[J]. IEEE Trans on Dependable and Secure Computing, 2022, 21(2): 559-572
[4] Zhang Chi, Song Nan, Lin Guosheng, et al. Few-shot incremental learning with continually evolved classifiers[C] //Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 12455-12464
[5] Zhou Dawei, Wang Fuyun, Ye Hanjia, et al. Forward compatible few-shot class-incremental learning[C] //Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 9046-9056
[6] Zhao Linglan, Lu Jing, Xu Yunlu, et al. Few-shot class-incremental learning via class-aware bilateral distillation[C] //Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 11838-11847
[7] Chen Chen. Research on identification techniques for deep synthetic content from generative AI (large language models)[J]. Journal of Information Security Research, 2024, 10(Suppl 1): 83-86 (in Chinese)
[8] Cheng Xiantao. Malicious client detection and defense methods for federated learning[J]. Journal of Information Security Research, 2024, 10(2): 163-169 (in Chinese)
[9] Le Roux Q, Bourbao E, Teglia Y, et al. A comprehensive survey on backdoor attacks and their defenses in face recognition systems[J]. IEEE Access, 2024, 12: 47433-47468
[10] Gu Tianyu, Liu Kang, Dolan-Gavitt B, et al. BadNets: Evaluating backdooring attacks on deep neural networks[J]. IEEE Access, 2019, 7: 47230-47244
[11] Quiring E, Rieck K. Backdooring and poisoning neural networks with image-scaling attacks[C] //Proc of the 2020 IEEE Security and Privacy Workshops (SPW). Piscataway, NJ: IEEE, 2020: 41-47
[12] Zhao Feng, Li Zhou, Zhong Qi, et al. Natural backdoor attacks on deep neural networks via raindrops[J]. Security and Communication Networks, 2022, 1(3): 4593002-4593010
[13] Guo Wenbo, Wang Lun, Yan Xu, et al. Towards inspecting and eliminating trojan backdoors in deep neural networks[C] //Proc of the 2020 IEEE Int Conf on Data Mining (ICDM). Piscataway, NJ: IEEE, 2020: 162-171
[14] Liu Kang, Dolan-Gavitt B, Garg S. Fine-pruning: Defending against backdooring attacks on deep neural networks[C] //Proc of the Int Symp on Research in Attacks, Intrusions, and Defenses. Berlin: Springer, 2018: 273-294
[15] Gao Yansong, Xu Bian, Wang Derui, et al. STRIP: A defence against trojan attacks on deep neural networks[C] //Proc of the 35th Annual Computer Security Applications Conference. New York: ACM, 2019: 113-125