Recent advances in federated learning address critical challenges in training models across heterogeneous environments, improving both training efficiency and model performance. Asynchronous federated learning techniques are gaining traction, with new frameworks that dynamically compensate for model staleness, speeding up training without sacrificing accuracy. Parameter-efficient fine-tuning methods are being refined to reduce communication overhead while keeping updates robust across distributed clients, particularly for large language models. In parallel, new approaches aim to harmonize local and global model characteristics, tackling optimization divergence and knowledge interference. These developments matter most in industries where data privacy is paramount, such as healthcare and finance, because they enable collaborative model training without exposing sensitive information. The field is increasingly focused on frameworks that are not only technically sound but also practically deployable, paving the way for broader adoption in real-world scenarios.
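To make the staleness idea concrete, here is a minimal sketch of staleness-weighted asynchronous aggregation. This is a generic pattern in this family of methods, not the exact rule from any paper listed below; the polynomial decay in `staleness_weight` and the `mix_rate` parameter are illustrative assumptions.

```python
import numpy as np

def staleness_weight(staleness: int, alpha: float = 0.5) -> float:
    """Polynomial decay: the older an update, the less it contributes.

    The (1 + s)^(-alpha) form is one common choice; alpha is an
    illustrative hyperparameter, not taken from any paper above.
    """
    return (1.0 + staleness) ** (-alpha)

def async_merge(global_params: np.ndarray,
                client_params: np.ndarray,
                staleness: int,
                mix_rate: float = 0.5) -> np.ndarray:
    """Fold one asynchronously arriving client model into the global model.

    staleness = server round now - server round the client started from.
    A stale update is down-weighted rather than discarded, so slow clients
    still contribute without dragging the global model backwards.
    """
    eta = mix_rate * staleness_weight(staleness)
    return (1.0 - eta) * global_params + eta * client_params

# Example: a client that trained from round 7 reports back at round 10,
# so its update carries staleness 3 and gets a reduced mixing weight.
g = np.zeros(4)
c = np.ones(4)
print(async_merge(g, c, staleness=3))   # each entry is 0.25 rather than 0.5
```

Likewise, the communication savings from parameter-efficient fine-tuning come from shipping low-rank adapter factors instead of full weight deltas. The sketch below is again an illustrative assumption rather than the method of FLoRG or the other LoRA papers listed; the comment flags the factor-averaging mismatch that alignment-based aggregation aims to correct.

```python
import numpy as np

def lora_payload(d_out: int, d_in: int, rank: int) -> tuple[int, int]:
    """Parameters sent per layer: full weight delta vs. LoRA factors A, B."""
    return d_out * d_in, rank * (d_in + d_out)

def naive_lora_average(adapters: list[tuple[np.ndarray, np.ndarray]]):
    """Average LoRA factors (A, B) across clients, factor by factor.

    Caveat: mean(B_i) @ mean(A_i) != mean(B_i @ A_i) in general, so this
    naive rule distorts the aggregated update; that mismatch is the kind
    of problem alignment-based aggregation (e.g., Procrustes-style
    methods) is designed to address.
    """
    A_avg = np.mean([A for A, _ in adapters], axis=0)
    B_avg = np.mean([B for _, B in adapters], axis=0)
    return A_avg, B_avg

# Example: a 4096x4096 layer at rank 8 ships ~65K values instead of ~16.8M.
print(lora_payload(4096, 4096, 8))   # (16777216, 65536)
```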
Top papers
- FedNSAM: Consistency of Local and Global Flatness for Federated Learning (7.0)
- FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment (7.0)
- FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning (7.0)
- FedRD: Reducing Divergences for Generalized Federated Learning via Heterogeneity-aware Parameter Guidance (6.0)
- Federated Causal Discovery Across Heterogeneous Datasets under Latent Confounding (6.0)
- FedAFD: Multimodal Federated Learning via Adversarial Fusion and Distillation (6.0)
- FedBCD: Communication-Efficient Accelerated Block Coordinate Gradient Descent for Federated Learning (6.0)
- Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity (6.0)
- FedDAG: Clustered Federated Learning via Global Data and Gradient Integration for Heterogeneous Environments (6.0)
- Wireless Federated Multi-Task LLM Fine-Tuning via Sparse-and-Orthogonal LoRA (6.0)
- CA-AFP: Cluster-Aware Adaptive Federated Pruning (6.0)
- Rethinking LoRA for Privacy-Preserving Federated Learning in Large Models (6.0)
- Noise-aware Client Selection for carbon-efficient Federated Learning via Gradient Norm Thresholding (5.0)
- FedDis: A Causal Disentanglement Framework for Federated Traffic Prediction (5.0)
- Toward Enhancing Representation Learning in Federated Multi-Task Settings (5.0)
- Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation (5.0)
- DP-FedAdamW: An Efficient Optimizer for Differentially Private Federated Large Models (5.0)
- FedVG: Gradient-Guided Aggregation for Enhanced Federated Learning (5.0)
- Towards Performance-Enhanced Model-Contrastive Federated Learning using Historical Information in Heterogeneous Scenarios (4.0)
- Trust-Based Incentive Mechanisms in Semi-Decentralized Federated Learning Systems (3.0)
- On the Sensitivity of Firing Rate-Based Federated Spiking Neural Networks to Differential Privacy (3.0)
- Hybrid Federated Learning for Noise-Robust Training (2.0)