Privacy-preserving Federated Learning Research Based on Confused Modulo Projection Homomorphic Encryption
Journal of Information Security Research
2025, 11(3): 198-.
In the current era of big data, deep learning is booming and has become a powerful tool for solving real-world problems. However, traditional centralized deep learning systems are at risk of privacy leakage. To address this problem, federated learning, a distributed machine learning approach, has emerged. Federated learning allows multiple organizations or individuals to train models jointly without sharing raw data: each user uploads its local model parameters to a server, which aggregates the parameters from all users into a global model and returns it to them. This approach achieves global optimization while avoiding the leakage of private data. However, even with federated learning, attackers may still be able to reconstruct user data from the model parameters uploaded by users, thus violating privacy. Privacy protection has therefore become a focus of federated learning research. In this paper, we propose FLFC (federated learning with confused modulo projection homomorphic encryption), a federated learning scheme based on confused modulo projection homomorphic encryption, to address these issues. The scheme adopts a self-developed modular fully homomorphic encryption algorithm to encrypt user model parameters. This algorithm offers high computational efficiency, support for floating-point operations, and localization, thereby achieving stronger privacy protection. Experimental results show that the FLFC scheme achieves higher average accuracy and good stability compared with the FedAvg scheme.
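The aggregation pattern the abstract describes can be sketched in a few lines. The confused modulo projection scheme itself is not specified in the abstract, so the sketch below substitutes a textbook Paillier cryptosystem (additively homomorphic) with insecure demo-sized primes, and a simple fixed-point encoding to stand in for the scheme's floating-point support; the client weights, the SCALE factor, and the single-parameter model are all illustrative assumptions, not details from the paper.

```python
# Toy sketch of homomorphic federated aggregation: clients encrypt model
# parameters, the server sums ciphertexts without seeing any plaintext.
# Paillier with tiny primes stands in for the paper's (non-public)
# confused modulo projection scheme -- NOT secure, for illustration only.
import random
from math import gcd, lcm

p, q = 17, 19            # demo-sized primes (insecure)
n = p * q
n2 = n * n
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)     # valid since L(g^lam mod n^2) = lam mod n for g = n+1

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2, r random coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Hypothetical round: three clients, one model parameter each,
# encoded as fixed-point integers to mimic floating-point support.
SCALE = 10
weights = [0.4, 0.6, 0.5]
ciphertexts = [encrypt(round(w * SCALE)) for w in weights]

# Homomorphic addition: multiplying ciphertexts adds plaintexts mod n,
# so the server aggregates without decrypting individual updates.
c_sum = 1
for c in ciphertexts:
    c_sum = c_sum * c % n2

avg = decrypt(c_sum) / SCALE / len(weights)   # 0.5, the FedAvg-style mean
```

Only the aggregate sum is ever decrypted, so individual parameter updates stay hidden from the server; the plaintext sum must stay below n for the modular arithmetic to recover it exactly.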