Federated Learning

Trending
19 papers
5.2 viability
+180% in 30 days

State of the Field

Recent advances in federated learning address critical challenges in training models across decentralized environments, with a particular focus on privacy and efficiency. New frameworks such as FLoRG and LA-LoRA optimize fine-tuning of large models by minimizing communication overhead and improving convergence under privacy constraints. FedPSA takes a nuanced approach to asynchronous federated learning, improving performance by dynamically adjusting for model staleness. Heterogeneity-aware methods such as FedRD and FedDis are also noteworthy: they tackle the complexities of diverse client data to ensure robust model generalization. The introduction of DP-FedAdamW signals a shift toward more effective optimization strategies in differentially private settings. Collectively, these developments enhance the performance of federated learning systems and hold promise for commercial applications in sectors like healthcare and finance, where data privacy and efficient collaboration are paramount.
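All of the methods surveyed above refine the federated averaging (FedAvg) baseline, in which a server aggregates client model updates weighted by each client's local data size. A minimal NumPy sketch of one aggregation round (toy weights and sizes, for illustration only):

```python
import numpy as np

def fedavg_round(global_w, client_updates, client_sizes):
    """One FedAvg round: average client weights, weighted by local data size."""
    total = sum(client_sizes)
    new_w = np.zeros_like(global_w)
    for w, n in zip(client_updates, client_sizes):
        new_w += (n / total) * w
    return new_w

# Toy example: three clients with different local models and data sizes.
updates = [np.ones(4) * 1.0, np.ones(4) * 2.0, np.ones(4) * 3.0]
sizes = [10, 20, 70]
w = fedavg_round(np.zeros(4), updates, sizes)
print(w)  # weighted mean: 0.1*1 + 0.2*2 + 0.7*3 = 2.6 in every coordinate
```

The papers below modify different parts of this loop: what the clients send (LoRA factors, pruned masks), when the server applies updates (asynchrony, staleness), and how it aggregates them (clustering, differential privacy).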

Last updated Feb 26, 2026

Papers

1–10 of 19
Research Paper · Feb 17, 2026

FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning

Asynchronous Federated Learning (AFL) has emerged as a significant research area in recent years. By not waiting for slower clients and executing the training process concurrently, it achieves faster ...

7.0 viability
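FedPSA's exact staleness model is behind the truncation, but the standard asynchronous-FL pattern it refines down-weights each update by its staleness, i.e., the number of rounds elapsed since the client pulled the global model. A common polynomial-decay weighting, shown as an illustrative sketch (not FedPSA itself; `alpha` and `a` are hypothetical parameters):

```python
import numpy as np

def async_apply(global_w, client_delta, staleness, alpha=0.6, a=0.5):
    """Apply one stale client update with a polynomially decayed mixing weight.

    staleness: rounds elapsed since the client downloaded the global model.
    alpha: base mixing rate; a: decay exponent (larger -> stronger penalty).
    """
    weight = alpha * (1.0 + staleness) ** (-a)
    return global_w + weight * client_delta

w = np.array([1.0, 1.0])
fresh = async_apply(w, np.array([1.0, -1.0]), staleness=0)  # weight 0.6
stale = async_apply(w, np.array([1.0, -1.0]), staleness=3)  # weight 0.6 * 4^-0.5 = 0.3
print(fresh, stale)
```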
Research Paper · Feb 27, 2026

FedNSAM: Consistency of Local and Global Flatness for Federated Learning

In federated learning (FL), multi-step local updates and data heterogeneity usually lead to sharper global minima, which degrades the performance of the global model. Popular FL algorithms integrate s...

7.0 viability
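The flatness-seeking algorithms this abstract alludes to typically build on sharpness-aware minimization (SAM): ascend to the worst-case point in a small neighborhood, then descend using the gradient taken there. A toy sketch of one SAM step on a quadratic loss (not FedNSAM's method; `rho` and `lr` are illustrative):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization step: perturb toward the local
    worst case, then descend using the gradient at the perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, radius rho
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed point
    return w - lr * g_sharp

# Toy loss L(w) = 0.5 * ||w||^2, so grad_fn(w) = w.
grad = lambda w: w
w = sam_step(np.array([2.0, 0.0]), grad)
print(w)  # -> [1.795, 0.]: descent uses the slightly sharper gradient at w + eps
```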
Research Paper · Feb 19, 2026

FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment

Parameter-efficient fine-tuning techniques such as low-rank adaptation (LoRA) enable large language models (LLMs) to adapt to downstream tasks efficiently. Federated learning (FL) further facilitates ...

7.0 viability
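The LoRA primitive underlying FLoRG and the other fine-tuning papers here: freeze the pretrained weight W and learn a low-rank update BA, so only the small factors travel between clients and server. A minimal sketch (dimensions are illustrative; FLoRG's Gram-matrix and Procrustes steps are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4                   # full dims vs. low rank r << min(d, k)

W = rng.normal(size=(d, k))           # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                  # trainable up-projection, zero-init

def forward(x):
    # Effective weight is W + B @ A; only A and B are trained and communicated.
    return x @ (W + B @ A).T

# Per-round communication: d*k floats for full weights vs. r*(d+k) for LoRA.
full, lora = d * k, r * (d + k)
print(full, lora)  # 4096 vs. 512 -> 8x smaller payload at rank 4
```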
Research Paper · Feb 26, 2026

Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity

Federated learning (FL) faces challenges in uncertainty quantification (UQ). Without reliable UQ, FL systems risk deploying overconfident models at under-resourced agents, leading to silent local fail...

6.0 viability
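The standard building block behind conformalized uncertainty quantification is split conformal prediction: score residuals on a held-out calibration set, take a finite-sample-corrected quantile, and pad test predictions by it. A sketch of the single-agent primitive (the paper's federated, dual-heterogeneity extension is not reproduced):

```python
import numpy as np

def conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal: (1 - alpha)-coverage intervals from calibration residuals."""
    scores = np.abs(cal_true - cal_pred)          # nonconformity scores
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
    q = np.quantile(scores, min(q_level, 1.0), method="higher")
    return test_pred - q, test_pred + q

# Toy calibration set with residuals spread evenly over [0, 1].
cal_pred = np.zeros(99)
cal_true = np.linspace(0.0, 1.0, 99)
lo, hi = conformal_interval(cal_pred, cal_true, np.array([5.0]), alpha=0.1)
print(lo, hi)  # interval centered on 5.0, half-width ~0.92
```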
Research Paper · Jan 28, 2026

FedRD: Reducing Divergences for Generalized Federated Learning via Heterogeneity-aware Parameter Guidance

Heterogeneous federated learning (HFL) aims to ensure effective and privacy-preserving collaboration among different entities. As newly joined clients require significant adjustments and additional tr...

6.0 viability
Research Paper · Feb 26, 2026

FedDAG: Clustered Federated Learning via Global Data and Gradient Integration for Heterogeneous Environments

Federated Learning (FL) enables a group of clients to collaboratively train a model without sharing individual data, but its performance drops when client data are heterogeneous. Clustered FL tackles ...

6.0 viability
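Clustered FL of the kind FedDAG builds on groups clients whose updates point in similar directions, then trains one model per cluster. A greedy cosine-similarity sketch (the threshold `tau` and the greedy rule are hypothetical, not FedDAG's actual criterion):

```python
import numpy as np

def cluster_clients(grads, tau=0.5):
    """Greedy clustering: a client joins the first cluster whose representative
    gradient has cosine similarity >= tau with its own; otherwise it starts one."""
    clusters, reps = [], []
    for i, g in enumerate(grads):
        g = g / (np.linalg.norm(g) + 1e-12)
        for members, rep in zip(clusters, reps):
            if float(g @ rep) >= tau:
                members.append(i)
                break
        else:
            clusters.append([i])
            reps.append(g)
    return clusters

grads = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
         np.array([0.0, 1.0]), np.array([-0.1, 0.9])]
print(cluster_clients(grads))  # -> [[0, 1], [2, 3]]
```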
Research Paper · Mar 2, 2026

CA-AFP: Cluster-Aware Adaptive Federated Pruning

Federated Learning (FL) faces major challenges in real-world deployments due to statistical heterogeneity across clients and system heterogeneity arising from resource-constrained devices. While clust...

6.0 viability
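Federated pruning builds on the basic magnitude-pruning primitive: zero out the smallest weights so resource-constrained devices train and transmit a sparse model. A sketch of that primitive (the pruning ratio is illustrative; CA-AFP's cluster-aware adaptive schedule is not reproduced):

```python
import numpy as np

def magnitude_prune(w, ratio=0.5):
    """Return w with the smallest-magnitude `ratio` fraction zeroed,
    plus the boolean mask a client would keep training."""
    k = int(ratio * w.size)
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask

w = np.array([0.1, -0.9, 0.05, 0.7])
pruned, mask = magnitude_prune(w, ratio=0.5)
print(pruned)  # -> [ 0.  -0.9  0.   0.7]
```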
Research Paper · Feb 23, 2026

Rethinking LoRA for Privacy-Preserving Federated Learning in Large Models

Fine-tuning large vision models (LVMs) and large language models (LLMs) under differentially private federated learning (DPFL) is hindered by a fundamental privacy-utility trade-off. Low-Rank Adaptati...

6.0 viability
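The privacy-utility trade-off this abstract refers to stems from the standard DP-SGD recipe used in DPFL: clip each per-client update to a fixed L2 norm C, then add Gaussian noise scaled to C. A minimal aggregation sketch (`clip` and `sigma` are illustrative, and no privacy accounting is done here):

```python
import numpy as np

def dp_aggregate(updates, clip=1.0, sigma=0.8, rng=None):
    """Clip each client update to L2 norm `clip`, sum, add Gaussian noise, average."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip / (norm + 1e-12)))  # bound sensitivity
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip, size=total.shape)  # calibrated to clip
    return (total + noise) / len(updates)

updates = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5.0 and 0.5
out = dp_aggregate(updates)
print(out)
```

With `sigma=0.0` the first update is scaled to norm 1 ([0.6, 0.8]) while the second passes through, giving an average of [0.45, 0.6]; the noise term is what buys the privacy guarantee, at the cost of utility.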
Research Paper · Feb 24, 2026

Wireless Federated Multi-Task LLM Fine-Tuning via Sparse-and-Orthogonal LoRA

Decentralized federated learning (DFL) based on low-rank adaptation (LoRA) enables mobile devices with multi-task datasets to collaboratively fine-tune a large language model (LLM) by exchanging local...

6.0 viability
Research Paper · Feb 3, 2026 · B2B · Healthcare

Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation

Emerging network paradigms and applications increasingly rely on federated learning (FL) to enable collaborative intelligence while preserving privacy. However, the sustainability of such collaborativ...

5.0 viability
Page 1 of 2