Journal of Information Security Research ›› 2022, Vol. 8 ›› Issue (3): 223-.


A Survey on Threats to Federated Learning

  • Online: 2022-03-01    Published: 2022-03-01


Wang Kunqing1, Liu Jing2, Zhao Yuhang3, Lyu Haoran3, Li Peng1, Liu Bingying1

  1 (Chinese People's Armed Police Force, Beijing 100089)

    2 (School of Life Sciences, Qilu Normal University, Jinan 250200)

    3 (School of Cyberspace Security, Beijing Institute of Technology, Beijing 100081)

  • Corresponding author: Wang Kunqing, MS, engineer. Main research interests: network and system security, intelligent adversarial techniques. 282522085@qq.com
  • About the authors: Wang Kunqing, MS, engineer. Main research interests: network and system security, intelligent adversarial techniques. 282522085@qq.com. Liu Jing, PhD, associate professor. Main research interest: bioinformatics. Liujing_1205@163.com. Zhao Yuhang, PhD candidate. Main research interest: artificial intelligence security. zhaoyuhang@bit.edu.cn. Lyu Haoran, MS. Main research interests: machine learning, intelligent adversarial attacks. lyuhaoran@bit.edu.cn. Li Peng, BS. Main research interests: network security, information management and information system applications. 723352284@qq.com. Liu Bingying, BS. Main research interests: information security, information system applications. 174432256@qq.com

Abstract: Federated learning is currently regarded as an effective solution to the problems of data islands and privacy protection, yet its own security and privacy issues have attracted widespread attention from industry and academia. Existing federated learning systems have been shown to contain vulnerabilities, which adversaries inside or outside the system can exploit to compromise data security. This paper first introduces the concept, classification, and threat models of federated learning in specific scenarios. It then introduces the confidentiality, integrity, and availability (CIA) model of federated learning, and presents a classification study of the attack methods that compromise the CIA properties of federated learning. Finally, it discusses the current challenges and future research directions for the federated learning CIA model.
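To make the integrity threat mentioned above concrete, the following is a minimal sketch of one federated-averaging round in which a single malicious client submits a scaled, inverted update. It assumes only NumPy; the update rule, client count, and poisoning scale are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def local_update(global_weights: np.ndarray, poisoned: bool = False) -> np.ndarray:
    """Simulate a client's local training step.
    A malicious client returns a scaled, inverted update, illustrating an
    integrity (model poisoning) threat to the aggregation step."""
    honest_update = global_weights + np.random.normal(0, 0.01, global_weights.shape)
    if poisoned:
        # Push the global model in the opposite direction, amplified 10x.
        return global_weights - 10.0 * (honest_update - global_weights)
    return honest_update

def fed_avg(client_updates: list[np.ndarray]) -> np.ndarray:
    """Unweighted FedAvg aggregation: the server never sees raw client data,
    so a single poisoned update can silently skew the global model."""
    return np.mean(client_updates, axis=0)

global_weights = np.zeros(4)
updates = [local_update(global_weights) for _ in range(9)]
updates.append(local_update(global_weights, poisoned=True))  # one malicious client
global_weights = fed_avg(updates)
print(global_weights)  # noticeably displaced despite 9 of 10 honest clients
```

Defenses surveyed under the integrity dimension (e.g., robust aggregation) aim to bound the influence of such outlier updates.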

Key words: federated learning, privacy leakage, confidentiality integrity and availability (CIA) model, membership attack, generative adversarial network (GAN) attack

