State of the Field
Recent advances in federated learning address critical challenges in training models across decentralized environments, with a particular focus on privacy and efficiency. New frameworks such as FLoRG and LA-LoRA optimize fine-tuning of large models by minimizing communication overhead and improving convergence under privacy constraints. FedPSA takes a more nuanced approach to asynchronous federated learning, improving performance by dynamically adjusting for model staleness. Heterogeneity-aware methods such as FedRD and FedDis are also noteworthy: they tackle the complexities of diverse client data to ensure robust model generalization. Meanwhile, DP-FedAdamW signals a shift toward more effective optimization strategies in differentially private settings. Collectively, these developments not only improve the performance of federated learning systems but also hold promise for commercial applications in sectors such as healthcare and finance, where data privacy and efficient collaboration are paramount.
Papers
1–10 of 19
FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning
Asynchronous Federated Learning (AFL) has emerged as a significant research area in recent years. By not waiting for slower clients and executing the training process concurrently, it achieves faster ...
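The core mechanic of AFL, merging each client update the moment it arrives and discounting it by how stale it is, can be illustrated with a small sketch. The polynomial decay below is a generic choice, not FedPSA's behavioral staleness model, and `staleness_weight` and `apply_async_update` are hypothetical names used for illustration.

```python
import numpy as np

def staleness_weight(staleness: int, alpha: float = 0.5) -> float:
    """Generic polynomial decay: the staler the update, the smaller its weight.
    FedPSA's actual staleness model is not reproduced here."""
    return (1.0 + staleness) ** (-alpha)

def apply_async_update(global_model, client_model, client_round, server_round,
                       base_lr: float = 0.5):
    """Merge one client's model as soon as it arrives; no synchronization barrier."""
    staleness = server_round - client_round      # rounds elapsed since the client pulled the model
    w = base_lr * staleness_weight(staleness)    # down-weight stale contributions
    return (1.0 - w) * global_model + w * client_model

# A fresh update (staleness 0) moves the global model more than a stale one.
g = np.zeros(4)
print(apply_async_update(g, np.ones(4), client_round=10, server_round=10))  # ~0.5 per entry
print(apply_async_update(g, np.ones(4), client_round=4, server_round=10))   # ~0.19 per entry
```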
FedNSAM: Consistency of Local and Global Flatness for Federated Learning
In federated learning (FL), multi-step local updates and data heterogeneity usually lead to sharper global minima, which degrades the performance of the global model. Popular FL algorithms integrate s...
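Sharpness-aware minimization (SAM), the family of local optimizers this abstract alludes to, perturbs the weights toward the worst-case nearby point before taking the descent step, steering training toward flat minima. A minimal NumPy sketch on a toy quadratic follows; it shows plain SAM only, not FedNSAM's local/global flatness consistency.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.05, rho=0.05):
    """One sharpness-aware minimization (SAM) step:
    1) ascend to the worst-case neighbour w + eps within radius rho,
    2) descend using the gradient evaluated at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent direction, scaled to rho
    return w - lr * grad_fn(w + eps)              # descend with the "sharp" gradient

# Toy quadratic L(w) = 0.5 * w^T A w with one sharp and one flat direction.
A = np.diag([10.0, 0.1])
grad = lambda w: A @ w

w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w, grad)
print(w)  # the sharp coordinate is tamed quickly; both head toward the origin
```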
FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment
Parameter-efficient fine-tuning techniques such as low-rank adaptation (LoRA) enable large language models (LLMs) to adapt to downstream tasks efficiently. Federated learning (FL) further facilitates ...
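For context, LoRA freezes the pretrained weight W and trains only a low-rank update BA, so a federated client uploads orders of magnitude fewer parameters. The sketch below shows the basic layer in NumPy; FLoRG's Gram-matrix and Procrustes-alignment machinery is not reproduced, and the class name is illustrative.

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update (B @ A) * scaling."""
    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
        self.A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))               # trainable up-projection (zero init)
        self.scaling = alpha / rank

    def forward(self, x):
        return x @ self.W.T + (x @ self.A.T) @ self.B.T * self.scaling

    def trainable_params(self):
        # Only A and B are updated and communicated in federated LoRA fine-tuning.
        return self.A.size + self.B.size

layer = LoRALinear(d_in=768, d_out=768, rank=4)
x = np.random.default_rng(1).normal(size=(2, 768))
print(layer.forward(x).shape)                  # (2, 768)
print(layer.trainable_params(), layer.W.size)  # 6144 trainable vs 589824 frozen
```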
Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity
Federated learning (FL) faces challenges in uncertainty quantification (UQ). Without reliable UQ, FL systems risk deploying overconfident models at under-resourced agents, leading to silent local fail...
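Conformal prediction, the UQ tool being federated here, wraps any probabilistic classifier with a finite-sample coverage guarantee. Below is a minimal centralized split-conformal sketch; the function names are placeholders, and the paper's federated, dual-heterogeneity version is considerably more involved.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal: score = 1 - p(true class); take the
    ceil((n+1)(1-alpha))/n empirical quantile of calibration scores."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    return np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def prediction_set(test_probs, qhat):
    """Include every class whose score 1 - p(class) falls below the threshold."""
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Toy usage: random softmax outputs stand in for a trained model.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_labels = rng.integers(0, 5, size=200)
qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
sets = prediction_set(rng.dirichlet(np.ones(5), size=3), qhat)
print(qhat, [s.tolist() for s in sets])  # ~90% marginal coverage on exchangeable data
```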
FedRD: Reducing Divergences for Generalized Federated Learning via Heterogeneity-aware Parameter Guidance
Heterogeneous federated learning (HFL) aims to ensure effective and privacy-preserving collaboration among different entities. As newly joined clients require significant adjustments and additional tr...
FedDAG: Clustered Federated Learning via Global Data and Gradient Integration for Heterogeneous Environments
Federated Learning (FL) enables a group of clients to collaboratively train a model without sharing individual data, but its performance drops when client data are heterogeneous. Clustered FL tackles ...
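Clustered FL, the family FedDAG extends, groups clients whose updates look alike and trains one model per group. Below is a generic sketch that clusters client update vectors with a tiny k-means; it illustrates the idea only, not FedDAG's global data-and-gradient integration.

```python
import numpy as np

def cluster_clients(updates, k=2, iters=20, seed=0):
    """Group client update vectors with a tiny k-means; a clustered-FL server
    would then maintain one global model per cluster."""
    rng = np.random.default_rng(seed)
    U = np.stack(updates)
    centers = U[rng.choice(len(U), size=k, replace=False)]
    for _ in range(iters):
        # Assign each client to the nearest cluster center, then recenter.
        assign = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = U[assign == c].mean(axis=0)
    return assign

# Two synthetic client populations with different data distributions.
rng = np.random.default_rng(1)
group_a = [rng.normal(loc=+1.0, size=8) for _ in range(5)]
group_b = [rng.normal(loc=-1.0, size=8) for _ in range(5)]
print(cluster_clients(group_a + group_b, k=2))  # e.g. [0 0 0 0 0 1 1 1 1 1]
```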
CA-AFP: Cluster-Aware Adaptive Federated Pruning
Federated Learning (FL) faces major challenges in real-world deployments due to statistical heterogeneity across clients and system heterogeneity arising from resource-constrained devices. While clust...
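Federated pruning tailors model size to each device's resource budget. The sketch below applies unstructured magnitude pruning with a per-client ratio; the budgets are hypothetical, and CA-AFP's cluster-aware, adaptive schedule is not shown.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `ratio` of weights."""
    if ratio <= 0.0:
        return weights
    k = int(ratio * weights.size)
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > thresh)

# Hypothetical budgets: constrained clients get higher pruning ratios.
client_budgets = {"phone": 0.8, "laptop": 0.5, "server": 0.0}
w = np.random.default_rng(0).normal(size=(64, 64))
for name, ratio in client_budgets.items():
    pruned = magnitude_prune(w, ratio)
    print(name, f"{(pruned == 0).mean():.2f} of weights pruned")
```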
Rethinking LoRA for Privacy-Preserving Federated Learning in Large Models
Fine-tuning large vision models (LVMs) and large language models (LLMs) under differentially private federated learning (DPFL) is hindered by a fundamental privacy-utility trade-off. Low-Rank Adaptati...
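The privacy-utility trade-off the abstract mentions comes from the DP mechanism itself: each update is clipped to bound its sensitivity, then Gaussian noise is added before aggregation, and utility drops as the noise grows. Below is a generic DP aggregation sketch, not the paper's LoRA-specific remedy; the function name is illustrative.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Clip each client update to `clip_norm`, average, add Gaussian noise.
    Larger noise_multiplier -> stronger privacy, lower utility."""
    rng = np.random.default_rng(seed)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(scale=sigma, size=mean.shape)

updates = [np.random.default_rng(i).normal(size=16) for i in range(8)]
print(np.linalg.norm(dp_aggregate(updates, noise_multiplier=0.5)))
```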
Wireless Federated Multi-Task LLM Fine-Tuning via Sparse-and-Orthogonal LoRA
Decentralized federated learning (DFL) based on low-rank adaptation (LoRA) enables mobile devices with multi-task datasets to collaboratively fine-tune a large language model (LLM) by exchanging local...
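One plausible reading of "sparse-and-orthogonal" is that per-task LoRA factors are kept in (near-)orthogonal subspaces so multi-task updates can be exchanged without interfering. The projection below is an assumption-laden illustration of that idea only; the paper's actual construction may differ.

```python
import numpy as np

def orthogonalize_against(A_new: np.ndarray, A_ref: np.ndarray) -> np.ndarray:
    """Remove from A_new's rows any component lying in A_ref's row space."""
    Q, _ = np.linalg.qr(A_ref.T)       # columns of Q span A_ref's row space
    return A_new - (A_new @ Q) @ Q.T   # project rows onto the orthogonal complement

rng = np.random.default_rng(0)
A_task1 = rng.normal(size=(4, 64))     # task 1's LoRA down-projection
A_task2 = rng.normal(size=(4, 64))     # task 2's, before deconfliction
A_task2_orth = orthogonalize_against(A_task2, A_task1)
print(np.abs(A_task2_orth @ A_task1.T).max())  # ~0: subspaces no longer overlap
```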
Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation
Emerging network paradigms and applications increasingly rely on federated learning (FL) to enable collaborative intelligence while preserving privacy. However, the sustainability of such collaborativ...
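The least core is a cooperative-game payoff rule: choose payoffs x minimizing the largest excess e such that every coalition S receives at least v(S) - e. For small games it is a direct linear program; the sketch below brute-forces all coalitions with scipy, the toy valuation function is hypothetical, and a "practical" mechanism, as the title promises, would have to scale past this enumeration.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def least_core(n, v):
    """Least-core payoffs for an n-player game with characteristic function v.
    Minimize e s.t. sum_{i in S} x_i >= v(S) - e for every proper coalition S,
    and sum_i x_i = v(grand coalition)."""
    players = list(range(n))
    A_ub, b_ub = [], []
    for r in range(1, n):
        for S in itertools.combinations(players, r):
            row = np.zeros(n + 1)
            row[list(S)] = -1.0   # -sum_{i in S} x_i
            row[n] = -1.0         # -e
            A_ub.append(row)
            b_ub.append(-v(frozenset(S)))
    A_eq = [np.append(np.ones(n), 0.0)]   # efficiency: payoffs exhaust v(N)
    b_eq = [v(frozenset(players))]
    c = np.append(np.zeros(n), 1.0)       # objective: minimize e
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[n]

# Hypothetical 3-client FL game: coalition value = accuracy gain from pooling data.
v = lambda S: {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}[len(S)]
x, e = least_core(3, v)
print(x, e)  # symmetric game -> equal payoffs [2, 2, 2], excess e = -1
```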