Journal of Information Security Research ›› 2022, Vol. 8 ›› Issue (3): 223-.
Online: 2022-03-01
Published: 2022-03-01
Wang Kunqing1, Liu Jing2, Zhao Yuhang3, Lyu Haoran3, Li Peng1, Liu Bingying1
1 (Chinese People's Armed Police Force, Beijing 100089)
2 (School of Life Sciences, Qilu Normal University, Jinan 250200)
3 (School of Cyberspace Security, Beijing Institute of Technology, Beijing 100081)
Corresponding author:
Wang Kunqing, master's degree, engineer. Main research interests include network and system security and intelligent adversarial techniques. 282522085@qq.com
About the authors:
Wang Kunqing, master's degree, engineer. Main research interests include network and system security and intelligent adversarial techniques. 282522085@qq.com
Liu Jing, PhD, associate professor. Main research interest: bioinformatics. Liujing_1205@163.com
Zhao Yuhang, PhD candidate. Main research interest: artificial intelligence security. zhaoyuhang@bit.edu.cn
Lyu Haoran, master's degree. Main research interests: machine learning and intelligent adversarial attacks. lyuhaoran@bit.edu.cn
Li Peng, bachelor's degree. Main research interests: network security, information management, and information system applications. 723352284@qq.com
Liu Bingying, bachelor's degree. Main research interests: information security and information system applications. 174432256@qq.com