
Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent): lightweight coding agent in your terminal.

Claude Code (AI Agent): agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding): AI agent mindset installer and workflow scaffolder.

Cursor (IDE): AI-first code editor built on VS Code.

VS Code (IDE): free, open-source editor by Microsoft.

Estimated build cost: $10K-$14K over 6-10 weeks.



Founder's Pitch

"Develop robust defenses in Federated Learning to mitigate layer-specific backdoor attacks like LSA."

Category: Security · Score: 4
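To make the pitch concrete, one common family of defenses against poisoned client updates replaces FedAvg's per-layer mean with a robust statistic applied layer by layer. The sketch below is illustrative only, not the paper's method: it uses a coordinate-wise median per layer (a simplification in the spirit of Byzantine-robust aggregation), and all names (`robust_aggregate`, the toy layer names) are hypothetical.

```python
import numpy as np

def robust_aggregate(client_updates):
    """Aggregate per-layer client updates with a coordinate-wise median.

    client_updates: list of dicts mapping layer name -> np.ndarray update.
    Using the median per layer (instead of FedAvg's mean) bounds how far a
    small set of poisoned clients can shift any single layer's parameters,
    which matters when an attack targets specific "backdoor-critical" layers.
    """
    layers = client_updates[0].keys()
    return {
        name: np.median(np.stack([u[name] for u in client_updates]), axis=0)
        for name in layers
    }

# Toy round: 4 honest clients send small updates; 1 attacker sends a large
# poisoned update into one critical layer ("fc").
honest = [{"conv1": np.full((2, 2), 0.1), "fc": np.full(3, 0.1)} for _ in range(4)]
attacker = {"conv1": np.full((2, 2), 0.1), "fc": np.full(3, 10.0)}
agg = robust_aggregate(honest + [attacker])
# The median suppresses the outlier: the "fc" aggregate stays at 0.1.
```

A mean-based aggregate over the same round would move the "fc" layer to about 2.08, illustrating why per-layer robust statistics are a natural starting point for defenses against layer-specific attacks.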

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 2.5 (1/4 signals)
Series A Potential: 2.5 (1/4 signals)

Sources used for this analysis

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/16/2026
