Journal of Information Security Research ›› 2022, Vol. 8 ›› Issue (4): 357-.


A DCR Defense Mechanism Against Federated Learning Model Poisoning for Data Security Governance

  • Online: 2022-04-10  Published: 2022-04-10


Huang Xiangzhou1,2, Peng Changgen1,2,3, Tan Weijie1,2,3, and Li Zhen4

  1. State Key Laboratory of Public Big Data (Guizhou University), Guiyang 550025
  2. College of Computer Science and Technology, Guizhou University, Guiyang 550025
  3. Guizhou Big Data Industry Development and Application Research Institute, Guizhou University, Guiyang 550025
  4. College of Big Data and Information Engineering, Guizhou University, Guiyang 550025
  • Corresponding author: Peng Changgen (peng_stud@163.com)

Abstract: Federated learning is a new mode of data security governance that keeps data usable while invisible; however, it faces the threat of model poisoning attacks, and its security urgently needs improvement. To this end, a dynamic cacheable revocable (DCR) model poisoning defense mechanism for federated learning is proposed. Building on loss-based model poisoning defenses, the mechanism computes and applies a dynamic threshold before each training round, so that an adversary cannot learn the defense a priori, which increases the difficulty of attack. Moreover, a buffer period is set in the mechanism to reduce the risk of benign nodes being "killed by mistake". At the same time, the system caches the global model parameters of each round; if model poisoning occurs, the global model parameters from before the buffer-period rounds are reloaded, making the model revocable. This revocable setting reduces the negative impact of model poisoning attacks on the global model, so that the federated learning model can still converge with good performance after being attacked, ensuring both the security and the performance of the federated learning model. Finally, the defense effect and model performance of the mechanism are verified in a TFF (TensorFlow Federated) experimental environment.

Key words: data governance, federated learning, model poisoning, malicious node, dynamic cacheable revocable
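The abstract's three ingredients (a per-round dynamic threshold, a buffer period before removal, and cached global models for rollback) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the class and method names are hypothetical, and the threshold rule shown (mean client loss plus one standard deviation, recomputed every round) is an assumption standing in for whatever dynamic rule the paper uses.

```python
# Hedged sketch of a DCR-style (dynamic cacheable revocable) defense loop.
# All names and the threshold rule are illustrative assumptions.
import statistics

BUFFER_ROUNDS = 2  # rounds a suspicious client stays buffered before removal


class DCRDefense:
    def __init__(self):
        self.model_cache = {}    # round -> cached global model parameters
        self.suspect_since = {}  # client id -> round it was first flagged
        self.banned = set()

    def dynamic_threshold(self, losses):
        # Recomputed each round, so an adversary cannot know it a priori.
        mean = statistics.fmean(losses.values())
        stdev = statistics.pstdev(losses.values())
        return mean + stdev

    def filter_round(self, rnd, losses, global_params):
        """Return ids of clients whose updates may be aggregated this round."""
        self.model_cache[rnd] = global_params  # cacheable: store every round
        threshold = self.dynamic_threshold(losses)
        accepted = []
        for cid, loss in losses.items():
            if cid in self.banned:
                continue
            if loss > threshold:
                first = self.suspect_since.setdefault(cid, rnd)
                # Buffer period: remove only after repeated violations,
                # reducing the risk of "killing" a benign node by mistake.
                if rnd - first >= BUFFER_ROUNDS:
                    self.banned.add(cid)
                    continue
            else:
                self.suspect_since.pop(cid, None)  # behaved again: reset
            accepted.append(cid)
        return accepted

    def rollback(self, rnd):
        # Revocable: reload the cached global model from before the
        # buffer-period rounds if poisoning slipped through.
        return self.model_cache[max(0, rnd - BUFFER_ROUNDS)]
```

In this sketch, a client with a persistent outlier loss is still aggregated during the buffer rounds, then banned, and `rollback` restores the global parameters cached before those rounds, which is what lets training continue to converge after an attack.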

