Journal of Information Security Research ›› 2022, Vol. 8 ›› Issue (3): 270-.

• Special Topic on Deep Learning Security and Adversarial Attacks •

A Security Protection Method for Federated Learning Models Based on Secure Shuffling and Differential Privacy

Su Yong 1,2, Liu Wenlong 1,2, Liu Shenglong 3, Jiang Yiwen 3

  1. NARI Group Corporation (State Grid Electric Power Research Institute Co., Ltd.), Nanjing 211100

  2. Jiangsu Ruizhong Data Co., Ltd., Nanjing 211100

  3. Big Data Center of State Grid Corporation of China, Beijing 110000

  • Publication date: 2022-03-01 Online date: 2022-03-01
  • Corresponding author: Su Yong, M.S. Main research interests include electric power informatization and data security. 284666776@qq.com
  • About the authors: Su Yong, M.S. Main research interests include electric power informatization and data security. 284666776@qq.com
    Liu Wenlong, B.S. Main research interest is electric power informatization. liuwenlong@sgepri.sgcc.com.cn
    Liu Shenglong, M.S., deputy division director of the Security, Quality and Compliance Department of the State Grid Big Data Center, long engaged in network and data security work. shenglong-liu@sgcc.com.cn
    Jiang Yiwen, Ph.D., data security specialist in the Security, Quality and Compliance Division of the State Grid Big Data Center; main research interest is data security. yiwen-jiang@sgcc.com.cn


Abstract: Federated learning enables joint modeling while guaranteeing data privacy, security, and regulatory compliance, but preserving the privacy of a released federated learning model while maintaining its utility for users remains an urgent open problem. This paper proposes SFLSDP, a security protection method for federated learning models based on secure shuffling and differential privacy. The owner of the federated model uses differential privacy to add noise to the model parameters of federated learning, generating noisy model parameters; it then encrypts the model parameters with the user authorization key and a secure shuffling algorithm, and sends the encrypted federated learning model parameters to the user. When using the federated learning model locally, the user decrypts the model parameter ciphertext with the authorization key and the secure shuffling algorithm to obtain the noisy federated learning model, and obtains the desired outputs by feeding in local data as the model's input. Experiments show that this method protects the privacy of the original learning model while achieving high utility.



Abstract: On the premise of protecting user privacy and ensuring data security and legal compliance, integrating data from organizations across industries has become a major challenge for artificial intelligence practitioners. This paper proposes a security protection method for federated learning models based on secure shuffling and differential privacy (SFLSDP). The owner of the federated model uses differential privacy technology to add noise to the model parameters of federated learning, generating noisy model parameters; it then encrypts the model parameters with the authorization key and the secure shuffling algorithm, and sends the encrypted federated learning model parameters to the user. When using the federated learning model locally, the user first decrypts the model parameter ciphertext with the authorization key and the secure shuffling algorithm, obtaining a noisy federated learning model, and then obtains the desired output by taking local data as the input of the model. Experiments show that this method can protect the privacy of the original learning model while obtaining high utility.
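The release pipeline described in the abstract (the owner adds differential-privacy noise, then shuffle-encrypts with the authorization key; the user inverts the shuffle to recover the noisy model) can be sketched as follows. The paper does not specify the noise mechanism or the shuffle construction, so this is only a minimal sketch assuming the standard Laplace mechanism and a permutation seeded by the authorization key; the function names (`add_dp_noise`, `shuffle_encrypt`, `shuffle_decrypt`) are hypothetical, not the authors' API.

```python
import numpy as np

def add_dp_noise(params, epsilon, sensitivity=1.0, rng=None):
    """Laplace mechanism (assumed): perturb each parameter with
    Laplace noise of scale sensitivity/epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return params + rng.laplace(0.0, scale, size=params.shape)

def shuffle_encrypt(params, key):
    """Key-seeded shuffle (assumed construction): permute the flattened
    parameter vector with a permutation derived from the key."""
    perm = np.random.default_rng(key).permutation(params.size)
    return params[perm]

def shuffle_decrypt(cipher, key):
    """Rebuild the same key-seeded permutation and apply its inverse,
    recovering the original parameter order."""
    perm = np.random.default_rng(key).permutation(cipher.size)
    inv = np.empty_like(perm)
    inv[perm] = np.arange(perm.size)  # inverse permutation
    return cipher[inv]

# Owner side: noise, then shuffle-encrypt with the authorization key.
key = 20220301  # placeholder for the user authorization key
model_params = np.linspace(-1.0, 1.0, 8)
noisy = add_dp_noise(model_params, epsilon=1.0)
ciphertext = shuffle_encrypt(noisy, key)

# User side: decrypt with the same key; the DP noise remains in place.
recovered = shuffle_decrypt(ciphertext, key)
```

Note that decryption recovers exactly the noisy parameters the owner produced: the shuffle protects the release in transit against parties without the key, while the differential-privacy noise is what protects the original model even from the authorized user.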


Key words: federated learning, differential privacy, shuffling algorithm, privacy protection, data security