Journal of Information Security Research ›› 2024, Vol. 10 ›› Issue (3): 194-.

Research on Privacy Protection Technology in Federated Learning

Liu Xiaoqian1, Xu Fei1, Ma Zhuo1, Yuan Ming1,2, and Qian Hanwei1,3


  1. Department of Computer Information and Cyber Security, Jiangsu Police Institute, Nanjing 210031
  2. School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023
  3. Software Institute, Nanjing University, Nanjing 210023

  • Online: 2024-03-23  Published: 2024-03-08

  • Corresponding author: Liu Xiaoqian, PhD, Lecturer. Main research interests: data mining and privacy protection. lxqlara@163.com
  • About the authors: Liu Xiaoqian, PhD, Lecturer. Main research interests: data mining and privacy protection. lxqlara@163.com. Xu Fei. Main research interest: police big data. 1275723335@qq.com. Ma Zhuo, PhD, Lecturer. Main research interests: federated learning and privacy protection. mazhuo@jspi.cn. Yuan Ming, PhD candidate, Lecturer. Main research interests: deep learning and natural language processing. yuanming@jspi.cn. Qian Hanwei, PhD candidate, Lecturer. Main research interests: information security and software engineering. qianhanwei@jspi.cn

Abstract: In federated learning, multiple participants train models through parameter coordination without sharing raw data. However, the extensive parameter exchange in this process leaves the model vulnerable to threats not only from external users but also from internal participants, so research on privacy protection techniques in federated learning is crucial. This paper surveys the current state of research on privacy protection in federated learning. It classifies the security threats to federated learning into external attacks and internal attacks, and on this basis summarizes external attack techniques such as model inversion attacks, external reconstruction attacks, and external inference attacks, as well as internal attack techniques such as poisoning attacks, internal reconstruction attacks, and internal inference attacks. From the perspective of matching defenses to attacks, it then summarizes data perturbation techniques, including central, local, and distributed differential privacy, and process encryption techniques, including homomorphic encryption, secret sharing, and trusted execution environments. Finally, the paper analyzes the difficulties facing privacy protection in federated learning and identifies key directions for improvement.
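To make the data perturbation side concrete, the sketch below shows the kind of client-side step that local differential privacy implies in federated learning: each client clips its model update to bound sensitivity and adds calibrated Gaussian noise before uploading it. This is a minimal illustration, not a technique taken from the paper; the function name perturb_update and the parameters clip_norm and noise_multiplier are assumptions chosen for the example.

```python
import numpy as np

def perturb_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client update to bound its sensitivity, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# The server only ever receives the noisy update, so an individual client's
# contribution is masked before it leaves the device.
client_update = np.array([0.3, -0.7, 1.2])
print(perturb_update(client_update))
```

On the process encryption side, a similarly minimal sketch of additive secret sharing shows why a secure aggregation server can learn the sum of client updates without seeing any single one. Real protocols operate over a finite field with modular arithmetic; the real-valued shares and the helper make_shares below are simplifications assumed only for illustration.

```python
import numpy as np

def make_shares(update, n_shares, rng):
    """Split an update into additive shares that sum back to the update."""
    shares = [rng.normal(size=update.shape) for _ in range(n_shares - 1)]
    shares.append(update - sum(shares))
    return shares

rng = np.random.default_rng(0)
u1, u2 = np.array([1.0, 2.0]), np.array([3.0, -1.0])
shares1, shares2 = make_shares(u1, 2, rng), make_shares(u2, 2, rng)
# Each aggregator sums one share from every client; combining the partial
# sums recovers only u1 + u2, never an individual client's update.
total = sum(shares1) + sum(shares2)
print(total)  # -> [4. 1.]
```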

Key words: federated learning, privacy attack, differential privacy, homomorphic encryption, privacy protection

