Federated Learning Comparison Hub
25 papers - avg viability 5.3
Recent advances in federated learning focus on improving model performance and efficiency while addressing privacy concerns and data heterogeneity. New frameworks such as HeteroFedSyn and FairFAL tackle data synthesis and active learning in heterogeneous environments, improving data utility and class balance across clients. Innovations in fine-tuning, such as Stabilized Federated LoRA and FLoRG, improve the stability and communication efficiency of adapting large language models in distributed settings. Asynchronous federated learning is also evolving: FedPSA introduces a more nuanced, behavioral measure of staleness to cope with client variability, while methods like FedBCD cut communication overhead in large-scale training. Together, these developments make federated systems more robust and promise to reduce operational costs and improve the scalability of machine learning across industries where data privacy is paramount, from healthcare to finance.
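All of the methods below build on the basic federated averaging (FedAvg) loop, in which a server repeatedly broadcasts a global model, clients train locally on private data, and the server aggregates the weighted results. As a point of reference, here is a minimal sketch of that loop on a toy linear model; the model, optimizer, and data are placeholders, not taken from any of the papers listed:

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    least-squares objective, standing in for any real local optimizer."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One synchronous FedAvg round: every client trains locally and the
    server averages the results, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    p = np.array(sizes, dtype=float)
    p /= p.sum()
    return sum(pi * wi for pi, wi in zip(p, updates))

# Toy run: three clients with shifted (non-IID) input distributions.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.0, 2.0):
    X = rng.normal(shift, 1.0, (50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):  # 20 communication rounds
    w = fedavg_round(w, clients)
print("recovered weights:", w)  # approaches w_true
```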
Top Papers
- HeteroFedSyn: Differentially Private Tabular Data Synthesis for Heterogeneous Federated Settings (8.0)
HeteroFedSyn is a framework for differentially private tabular data synthesis in heterogeneous federated settings, enabling secure data sharing for various tasks.
- Federated Active Learning Under Extreme Non-IID and Global Class Imbalance (8.0)
FairFAL is an adaptive federated active learning framework that enhances performance in class-imbalanced and non-IID settings.
- Stabilized Fine-Tuning with LoRA in Federated Learning: Mitigating the Side Effect of Client Size and Rank via the Scaling Factor (7.0)
SFed-LoRA stabilizes LoRA fine-tuning in federated learning by mitigating gradient collapse via the scaling factor, enabling faster convergence and improved stability for privacy-preserving LLM adaptation (a scaling-factor sketch follows this list).
- FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning (7.0)
FedPSA enhances federated learning by dynamically adjusting to model obsolescence, significantly improving asynchronous training efficiency (a staleness-weighting sketch follows this list).
- Resource-Adaptive Federated Text Generation with Differential Privacy (7.0)
A federated learning framework that combines DP fine-tuning with DP voting to generate synthetic text datasets while adapting to client resource constraints.
- FLoRG: Federated Fine-tuning with Low-rank Gram Matrices and Procrustes Alignment (7.0)
FLoRG optimizes federated learning with low-rank matrices to boost model accuracy and reduce communication overhead.
- Split Federated Learning Architectures for High-Accuracy and Low-Delay Model Training (7.0)
Proposes split federated learning architectures that improve accuracy while reducing training delay and communication overhead.
- FedBCD: Communication-Efficient Accelerated Block Coordinate Gradient Descent for Federated Learning (7.0)
A communication-efficient accelerated block coordinate gradient descent method that significantly reduces overhead in large-scale federated model training.
- FedNSAM: Consistency of Local and Global Flatness for Federated Learning (7.0)
FedNSAM enhances federated learning by aligning local and global model flatness to improve generalization capabilities.
- Revisiting Gradient Staleness: Evaluating Distance Metrics for Asynchronous Federated Learning Aggregation (7.0)
Evaluates distance metrics for quantifying gradient staleness, enabling adaptive aggregation that makes asynchronous federated learning more robust and efficient.
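The SFed-LoRA and FLoRG entries both concern federated LoRA fine-tuning, where the scaling factor alpha/r multiplies the low-rank update. The sketch below shows where that factor enters the forward pass and why naively averaging per-client adapters is fragile; it is illustrative only, since the summaries above do not spell out either paper's actual aggregation rule:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha):
    """LoRA forward pass: frozen weight W (d_out x d_in) plus the low-rank
    update (alpha / r) * B @ A, with A (r x d_in) and B (d_out x r)."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

def average_adapters(adapters, sizes):
    """Naive federated averaging of per-client adapters (A_i, B_i), weighted
    by dataset size. Averaging A and B separately means the averaged product
    B_avg @ A_avg differs from the average of the B_i @ A_i products -- the
    kind of instability the scaling-factor fix in SFed-LoRA targets."""
    p = np.asarray(sizes, dtype=float)
    p /= p.sum()
    A_avg = sum(pi * A for pi, (A, _) in zip(p, adapters))
    B_avg = sum(pi * B for pi, (_, B) in zip(p, adapters))
    return A_avg, B_avg

# Toy usage: two clients with rank-4 adapters on a 16 -> 8 layer
# (B starts at zero, the standard LoRA initialization).
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 16))
adapters = [(rng.normal(size=(4, 16)), np.zeros((8, 4))) for _ in range(2)]
A_avg, B_avg = average_adapters(adapters, sizes=[100, 300])
out = lora_forward(rng.normal(size=(5, 16)), W, A_avg, B_avg, alpha=8.0)
print(out.shape)  # (5, 8)
```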
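FedPSA and the gradient-staleness paper above both revolve around discounting stale client updates during asynchronous aggregation. A generic sketch of that idea follows; the polynomial and distance-based weightings are common choices from the async-FL literature, assumed here rather than taken from either paper (FedPSA's behavioral-staleness model in particular is not captured):

```python
import numpy as np

def staleness_weight(tau, a=0.5):
    """Polynomial decay in round-count staleness tau (rounds elapsed since
    the client pulled the global model)."""
    return (1.0 + tau) ** -a

def distance_weight(w_pulled, w_global, scale=1.0):
    """Alternative: measure staleness as parameter distance between the
    model the client started from and the current global model, in the
    spirit of the distance-metrics paper above."""
    return np.exp(-scale * np.linalg.norm(w_global - w_pulled))

def async_apply(w_global, w_client, tau, eta=0.5):
    """Server-side asynchronous update: blend an incoming (possibly stale)
    client model into the global model at a staleness-discounted rate."""
    alpha = eta * staleness_weight(tau)
    return (1 - alpha) * w_global + alpha * w_client

# Toy usage: a 5-round-stale update moves the global model less.
w_g, w_c = np.ones(3), np.zeros(3)
print(async_apply(w_g, w_c, tau=0))  # fresh: blends at rate eta
print(async_apply(w_g, w_c, tau=5))  # stale: discounted blend
```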