Journal of Information Security Research ›› 2026, Vol. 12 ›› Issue (3): 210-.


Federated Learning Backdoor Attack Based on Constrained Perturbation and Loss Regulation

Zhang Zhenbo, Zhang Shufen, Qu Changsheng, Zhong Qi, and Li Tao

  1. (College of Science, North China University of Science and Technology, Tangshan, Hebei 063210)
    (Hebei Province Key Laboratory of Data Science and Application (North China University of Science and Technology), Tangshan, Hebei 063210)
    (Tangshan Key Laboratory of Data Science (North China University of Science and Technology), Tangshan, Hebei 063210)
  • Online:2026-03-12 Published:2026-03-12

  • Corresponding author: Zhang Shufen, MS, professor, and master's supervisor. Her main research interests include network security, data security, and privacy protection. zhsf@ncst.edu.cn
  • About the authors: Zhang Zhenbo, master's candidate. His main research interests include data security and privacy protection. zhangzb6@stu.ncst.edu.cn Zhang Shufen, MS, professor, and master's supervisor. Her main research interests include network security, data security, and privacy protection. zhsf@ncst.edu.cn Qu Changsheng, MS. His main research interests include data security and privacy protection. 958518830@qq.com Zhong Qi, master's candidate. His main research interests include data security and privacy protection. zhongqi@stu.ncst.edu.cn Li Tao, master's candidate. His main research interests include data security and privacy protection. litao@stu.ncst.edu.cn

Abstract: Federated learning, as a distributed machine learning framework, enables multi-party collaborative training under data isolation and privacy protection. However, its distributed architecture makes it vulnerable to backdoor attacks. This paper proposes a federated learning backdoor attack method based on constrained perturbation and loss regulation (CPR). The method implants and propagates the backdoor through three modules: input perturbation, dynamic weight regulation, and secondary perturbation reinforcement. Input perturbation poisons training samples by adding constraint-based noise. Dynamic weight regulation adjusts the task weights via cosine annealing, balancing backdoor feature learning against main-task performance. Secondary perturbation reinforcement uses dynamic loss values to further perturb the poisoned samples and strengthen their backdoor features. The CPR backdoor attack is evaluated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. Experimental results show that, compared with pixel, label-flipping, and hybrid attacks, the CPR backdoor attack significantly improves the attack success rate while maintaining the model's main-task accuracy, and exhibits higher stealth and persistence under a variety of data distribution conditions.
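The abstract outlines three modules but gives no formulas or pseudocode. The NumPy sketch below illustrates one plausible reading of each module; the function names, the L-infinity noise bound, the cosine-annealing weight schedule, and the loss-scaled secondary perturbation are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def constrained_perturb(x, rng, eps=0.1):
    """Module 1 (input perturbation, assumed form): poison a sample by adding
    noise constrained to an L-infinity ball of radius eps, then clip to [0, 1]."""
    noise = rng.uniform(-eps, eps, size=x.shape)
    return np.clip(x + noise, 0.0, 1.0)

def cosine_annealed_weight(t, total_rounds, w_min=0.2, w_max=0.8):
    """Module 2 (dynamic weight regulation, assumed schedule): backdoor-task
    weight decays from w_max to w_min over training rounds via cosine annealing."""
    return w_min + 0.5 * (w_max - w_min) * (1 + np.cos(np.pi * t / total_rounds))

def combined_loss(main_loss, backdoor_loss, t, total_rounds):
    """Blend main-task and backdoor losses with the annealed weight."""
    w = cosine_annealed_weight(t, total_rounds)
    return w * backdoor_loss + (1 - w) * main_loss

def secondary_perturb(x_poisoned, loss_value, rng, base_eps=0.05):
    """Module 3 (secondary perturbation reinforcement, assumed form): perturb the
    poisoned sample again, with strength scaled by the current loss value."""
    eps = base_eps * min(loss_value, 1.0)
    noise = rng.uniform(-eps, eps, size=x_poisoned.shape)
    return np.clip(x_poisoned + noise, 0.0, 1.0)
```

Under this reading, a malicious client would poison a fraction of its local batch with `constrained_perturb`, train against `combined_loss` each round, and apply `secondary_perturb` when the backdoor loss remains high, so the backdoor signal is reinforced early while the main-task weight grows over time.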

Key words: federated learning, backdoor attack, constrained perturbation, loss regulation, dynamic weight regulation

