[1] Li T, Sahu A K, Talwalkar A, et al. Federated learning: Challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50-60
[2] McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[C] Proc of the Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2017: 1273-1282
[3] Xiao Xiong, Tang Zhuo, Xiao Bin, et al. Survey on privacy protection and security defense of federated learning[J]. Chinese Journal of Computers, 2023, 46(5): 1019-1044 (in Chinese)
[4] Dwork C. Differential privacy[C] Proc of the Int Colloquium on Automata, Languages, and Programming. Berlin: Springer, 2006: 1-12
[5] Yao A C C. How to generate and exchange secrets[C] Proc of the 27th Annual Symp on Foundations of Computer Science (SFCS 1986). Piscataway, NJ: IEEE, 1986: 162-167
[6] Xiong Wei, Wang Haiyang, Tang Yifei, et al. Privacy measurement methods for secure multi-party computation applications[J]. Journal of Information Security Research, 2024, 10(1): 6-11 (in Chinese)
[7] Liu B, Ding M, Shaham S, et al. When machine learning meets privacy: A survey and outlook[J]. ACM Computing Surveys, 2021, 54(2): 1-36
[8] Qian Wenjun, Shen Qingni, Wu Pengfei, et al. Research progress on privacy-preserving techniques in big data computing environments[J]. Chinese Journal of Computers, 2022, 45(4): 669-701 (in Chinese)
[9] Feng Dengguo. Development status and trends of confidential computing[J]. Journal of Information Security Research, 2024, 10(1): 2-5 (in Chinese)
[10] Mothukuri V, Parizi R M, Pouriyeh S, et al. A survey on security and privacy of federated learning[J]. Future Generation Computer Systems, 2021, 115: 619-640
[11] Zhou Jun, Fang Guoying, Wu Nan. Survey on security and privacy protection of federated learning[J]. Journal of Xihua University: Natural Science Edition, 2020, 39(4): 9-17 (in Chinese)
[12] Chen Bing, Cheng Xiang, Zhang Jiale, et al. Survey of security and privacy protection in federated learning[J]. Journal of Nanjing University of Aeronautics & Astronautics, 2020, 52(5): 675-684 (in Chinese)
[13] Gu Yuhao, Bai Yuebin. Research progress on security and privacy of federated learning models[J]. Journal of Software, 2023, 34(6): 2833-2864 (in Chinese)
[14] Tang Lingtao, Chen Zuoning, Zhang Lufei, et al. Research progress on privacy issues in federated learning[J]. Journal of Software, 2023, 34(1): 197-229 (in Chinese)
[15] Wang R, Lai J, Zhang Z, et al. Privacy-preserving federated learning for Internet of medical things under edge computing[J]. IEEE Journal of Biomedical and Health Informatics, 2022, 27(2): 854-865
[16] Yang Q, Liu Y, Chen T, et al. Federated machine learning: Concept and applications[J]. ACM Trans on Intelligent Systems and Technology, 2019, 10(2): 1-19
[17] Liu Yixuan, Chen Hong, Liu Yuhan, et al. Privacy-preserving techniques in federated learning[J]. Journal of Software, 2022, 33(3): 1057-1092 (in Chinese)
[18] Yang Z, Zhang J, Chang E C, et al. Neural network inversion in adversarial setting via background knowledge alignment[C] Proc of the 2019 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2019: 225-240
[19] Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures[C] Proc of the 22nd ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2015: 1322-1333
[20] Fredrikson M, Lantz E, Jha S, et al. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing[C] Proc of the 23rd USENIX Security Symp (USENIX Security 14). Berkeley, CA: USENIX Association, 2014: 17-32
[21] Shokri R, Stronati M, Song C, et al. Membership inference attacks against machine learning models[C] Proc of the 2017 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2017: 3-18
[22] Yang M, Cheng H, Chen F, et al. Model poisoning attack in differential privacy-based federated learning[J]. Information Sciences, 2023, 630: 158-172
[23] Biggio B, Nelson B, Laskov P. Poisoning attacks against support vector machines[J]. arXiv preprint, arXiv:1206.6389, 2012
[24] Jiang W, Li H, Liu S, et al. A flexible poisoning attack against machine learning[C] Proc of the 2019 IEEE Int Conf on Communications (ICC 2019). Piscataway, NJ: IEEE, 2019: 1-6
[25] Bagdasaryan E, Veit A, Hua Y, et al. How to backdoor federated learning[C] Proc of the Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2020: 2938-2948
[26] Hitaj B, Ateniese G, Perez-Cruz F. Deep models under the GAN: Information leakage from collaborative deep learning[C] Proc of the 2017 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2017: 603-618
[27] Wang Z, Song M, Zhang Z, et al. Beyond inferring class representatives: User-level privacy leakage from federated learning[C] Proc of the IEEE Conf on Computer Communications (IEEE INFOCOM 2019). Piscataway, NJ: IEEE, 2019: 2512-2520
[28] Zhu L, Liu Z, Han S. Deep leakage from gradients[C] Proc of Advances in Neural Information Processing Systems (NeurIPS 2019). Red Hook, NY: Curran Associates, 2019: 14774-14784
[29] Zhao B, Mopuri K R, Bilen H. iDLG: Improved deep leakage from gradients[J]. arXiv preprint, arXiv:2001.02610, 2020
[30] Melis L, Song C, De Cristofaro E, et al. Exploiting unintended feature leakage in collaborative learning[C] Proc of the 2019 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2019: 691-706
[31] Zhang W R, Tople S, Ohrimenko O. Leakage of dataset properties in multi-party machine learning[C] Proc of the 30th USENIX Security Symp. Berkeley, CA: USENIX Association, 2021: 2687-2704
[32] Nasr M, Shokri R, Houmansadr A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning[C] Proc of the 2019 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2019: 739-753
[33] Naseri M, Hayes J, De Cristofaro E. Local and central differential privacy for robustness and privacy in federated learning[J]. arXiv preprint, arXiv:2009.03561, 2020
[34] Ye Qingqing, Meng Xiaofeng, Zhu Minjie, et al. Survey on local differential privacy[J]. Journal of Software, 2018, 29(7): 1981-2005 (in Chinese)
[35] Cormode G, Jha S, Kulkarni T, et al. Privacy at scale: Local differential privacy in practice[C] Proc of the 2018 Int Conf on Management of Data. New York: ACM, 2018: 1655-1658
[36] Wang Leixia, Meng Xiaofeng. ESA: A novel privacy-preserving framework[J]. Journal of Computer Research and Development, 2022, 59(1): 144-171 (in Chinese)
[37] Truex S, Baracaldo N, Anwar A, et al. A hybrid approach to privacy-preserving federated learning[C] Proc of the 12th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2019: 1-11
[38] Yu Shengxing, Chen Zhong. Efficient and secure federated learning aggregation framework based on homomorphic encryption[J]. Journal on Communications, 2023, 44(1): 14-28 (in Chinese)
[39] Shamir A. How to share a secret[J]. Communications of the ACM, 1979, 22(11): 612-613
[40] Carlini N, Liu C, Kos J, et al. The secret sharer: Measuring unintended neural network memorization & extracting secrets[J]. arXiv preprint, arXiv:1802.08232, 2018
[41] Bonawitz K, Ivanov V, Kreuter B, et al. Practical secure aggregation for privacy-preserving machine learning[C] Proc of the 2017 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2017: 1175-1191
[42] Subramanyan P, Sinha R, Lebedev I, et al. A formal foundation for secure remote execution of enclaves[C] Proc of the 2017 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2017: 2435-2450
[43] Zhang X, Kang Y, Chen K, et al. Trading off privacy, utility, and efficiency in federated learning[J]. ACM Trans on Intelligent Systems and Technology, 2023, 14(6): 1-32
[44] Chen H, Zhu T, Zhang T, et al. Privacy and fairness in federated learning: On the perspective of tradeoff[J]. ACM Computing Surveys, 2023, 56(2): 1-37
[45] Warnat-Herresthal S, Schultze H, Shastry K L, et al. Swarm learning for decentralized and confidential clinical machine learning[J]. Nature, 2021, 594(7862): 265-270