Research on Privacy Protection Technology in Federated Learning
Journal of Information Security Research
2024, 10(3): 194-.
In federated learning, multiple participants jointly train a model through parameter coordination without sharing raw data. However, the extensive parameter exchange in this process exposes the model to threats not only from external users but also from internal participants, making research on privacy protection techniques in federated learning crucial. This paper surveys the current state of research on privacy protection in federated learning. It classifies the security threats to federated learning into external attacks and internal attacks. Based on this classification, it summarizes external attack techniques such as model inversion attacks, external reconstruction attacks, and external inference attacks, as well as internal attack techniques such as poisoning attacks, internal reconstruction attacks, and internal inference attacks. From the perspective of matching defenses to attacks, the paper then summarizes data perturbation techniques such as central differential privacy, local differential privacy, and distributed differential privacy, and process encryption techniques such as homomorphic encryption, secret sharing, and trusted execution environments. Finally, the paper analyzes the difficulties of federated learning privacy protection technology and identifies key directions for improvement.
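To illustrate one of the data perturbation techniques named above, the sketch below shows local differential privacy applied to a client's model update before it is sent to the server: the update's L2 norm is clipped to bound sensitivity, then Gaussian noise calibrated by the classic analytic bound is added. This is a minimal, hypothetical example, not code from the paper; the function name, parameters, and noise calibration (valid for epsilon < 1) are assumptions for illustration.

```python
import numpy as np

def ldp_perturb(update, clip_norm=1.0, epsilon=0.5, delta=1e-5, seed=0):
    """Clip a client update to bound its L2 sensitivity, then add Gaussian
    noise so the released update satisfies (epsilon, delta)-differential
    privacy under the classic Gaussian-mechanism bound (epsilon < 1).
    Illustrative sketch only; seed fixed for reproducibility."""
    rng = np.random.default_rng(seed)
    u = np.asarray(update, dtype=float)
    norm = np.linalg.norm(u)
    if norm > clip_norm:
        u = u * (clip_norm / norm)  # enforce L2 sensitivity <= clip_norm
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return u + rng.normal(scale=sigma, size=u.shape)
```

Because each client perturbs its own update locally, the server never sees a raw gradient, which is the property that distinguishes local differential privacy from the central variant, where a trusted server adds noise to the aggregate.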