Journal of Information Security Research ›› 2025, Vol. 11 ›› Issue (9): 797-.


A Covert Backdoor Attack Method in Few-shot Class-Incremental Learning

Qian Hui, Liu Yazhi, Li Wei, An Yi, and Li Siwei   

  1. School of Artificial Intelligence, North China University of Science and Technology, Tangshan, Hebei 063210, China
  • Online: 2025-09-30  Published: 2025-09-30

  • Corresponding author: Li Wei, M.S., associate professor. Main research interests: computer network security and machine learning. lw@ncst.edu.cn
  • About the authors: Qian Hui, master's student. Main research interests: computer networks and information security. qianhui@stu.ncst.edu.cn. Liu Yazhi, Ph.D., professor. Main research interests: AIGC, the industrial Internet, and computer networks and information security. liuyazhi@ncst.edu.cn. Li Wei, M.S., associate professor. Main research interests: computer network security and machine learning. lw@ncst.edu.cn. An Yi, M.S., senior engineer. Main research interests: detection and control technology and intelligent devices, computer networks, and networked control. beyond@ncst.edu.cn. Li Siwei, master's student. Main research interests: federated learning, and computer networks and information security. lisiwei@stu.ncst.edu.cn

Abstract: The rapid development of deep learning has sharply increased the demand for training data, and few-shot class-incremental learning has become an important technique for enhancing data integrity when training deep learning models. Users can directly download datasets, or models trained with few-shot class-incremental learning algorithms, to improve efficiency. While this technology brings convenience, however, the security of such models also deserves attention. This paper studies backdoor attacks on few-shot class-incremental learning models in the image domain and proposes a covert backdoor attack method that operates in both the initial and the incremental phase. In the initial phase, a covert backdoor trigger is injected into the base dataset, and this poisoned base dataset replaces the original data for incremental learning. In the incremental phase, when a new batch of samples arrives, some samples are selected to carry the trigger, and the trigger is iteratively optimized during the incremental process to achieve the best triggering effect. Experimental evaluation shows that the attack success rate (ASR) of the proposed covert backdoor attack method reaches up to 100%, while the clean test accuracy (CTA) and the model's performance on clean samples remain stable; the method is also robust against backdoor defense mechanisms.
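The incremental-phase poisoning the abstract describes can be sketched, in outline, as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function names `inject_trigger` and `poison_batch`, the blending coefficient `alpha`, and the fixed `poison_rate` are all hypothetical, and the paper's iterative trigger optimization is omitted here.

```python
import numpy as np

def inject_trigger(image, trigger, alpha=0.05):
    # Blend the trigger pattern at low amplitude so the perturbation
    # stays visually imperceptible (a covert/invisible trigger).
    return np.clip((1.0 - alpha) * image + alpha * trigger, 0.0, 1.0)

def poison_batch(images, labels, trigger, target_label, poison_rate=0.1, seed=0):
    # Select a fraction of the incoming batch, stamp the trigger into those
    # samples, and relabel them to the attacker's target class.
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = inject_trigger(images[i], trigger)
        poisoned_labels[i] = target_label
    return poisoned_images, poisoned_labels, idx
```

In the attack as described, the same routine would run on the base dataset in the initial phase and on each new batch in the incremental phase, with the trigger pattern updated between increments rather than held fixed as above.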

Key words: fewshot classincremental learning, model security, backdoor attacks, data poisoning, invisible trigger

