Federated Learning Security

Trending · 5 papers · 6.0 viability · +100% (30d)

State of the Field

Recent research in federated learning security is increasingly focused on enhancing resilience against adversarial attacks while maintaining privacy and efficiency. One promising direction involves integrating blockchain technology to create active defense mechanisms that bolster model integrity and data confidentiality. This approach not only addresses vulnerabilities inherent in decentralized training but also allows for adaptive responses to various attack strategies. Concurrently, the emergence of sophisticated adversarial techniques, such as distributed attacks that exploit structural nuances in model architectures, underscores the need for more nuanced defenses. Additionally, the looming threat of quantum computing has prompted the development of post-quantum cryptographic frameworks to safeguard collaborative threat intelligence sharing. These advancements signal a shift toward more robust, scalable solutions that prioritize both security and operational efficiency, making federated learning a more viable option for industries reliant on sensitive data, such as healthcare and finance.
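To make the threat model above concrete, here is a minimal, hypothetical sketch (not drawn from any of the papers below) contrasting plain federated averaging with a coordinate-wise median aggregator, one simple robust-aggregation defense against a poisoned client update. All names and values are illustrative assumptions.

```python
from statistics import median

def fedavg(updates):
    """Standard FedAvg step: coordinate-wise mean of client model updates."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

def median_aggregate(updates):
    """Coordinate-wise median: tolerant of a minority of malicious updates."""
    return [median(vals) for vals in zip(*updates)]

# Three honest clients plus one attacker submitting an inflated update.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
malicious = [[100.0, -100.0]]
updates = honest + malicious

print(fedavg(updates))            # mean is dragged far off by the attacker
print(median_aggregate(updates))  # median stays near the honest consensus
```

The same contrast motivates the blockchain- and proof-based defenses surveyed here: the server (or consensus layer) must be able to detect or tolerate updates it cannot inspect directly.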

Last updated Mar 15, 2026

Papers

Research Paper · Feb 25, 2026

Resilient Federated Chain: Transforming Blockchain Consensus into an Active Defense Layer for Federated Learning

Federated Learning (FL) has emerged as a key paradigm for building Trustworthy AI systems by enabling privacy-preserving, decentralized model training. However, FL is highly susceptible to adversarial...

7.0 viability
Research Paper · Mar 8, 2026

Hide and Find: A Distributed Adversarial Attack on Federated Graph Learning

Federated Graph Learning (FedGL) is vulnerable to malicious attacks, yet developing a truly effective and stealthy attack method remains a significant challenge. Existing attack methods suffer from lo...

7.0 viability
Research Paper · Mar 8, 2026

Post-quantum Federated Learning: Secure And Scalable Threat Intelligence For Collaborative Cyber Defense

Collaborative threat intelligence via federated learning (FL) faces critical risks from quantum computing, which can compromise classical encryption methods. This study proposes a quantum-secure FL fr...

7.0 viability
Research Paper · Mar 11, 2026

Repurposing Backdoors for Good: Ephemeral Intrinsic Proofs for Verifiable Aggregation in Cross-silo Federated Learning

While Secure Aggregation (SA) protects update confidentiality in Cross-silo Federated Learning, it fails to guarantee aggregation integrity, allowing malicious servers to silently omit or tamper with ...

7.0 viability
Research Paper · Mar 4, 2026

Structure-Aware Distributed Backdoor Attacks in Federated Learning

While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing studies on backdoor attacks in federated learning mainly...

2.0 viability