One-shot Federated Learning (OFL) is a distributed machine learning paradigm that constrains client-server communication to a single round, addressing the privacy risks and communication overhead associated with the multiple rounds of model exchange in traditional Federated Learning (FL). OFL also shows practical potential for integration with emerging approaches that require collaborative model training, such as large language models (LLMs). However, current OFL methods face two major challenges, data heterogeneity and model heterogeneity, which lead to subpar performance compared with conventional FL methods. Moreover, despite numerous studies addressing these limitations, a comprehensive survey is still lacking. To fill this gap, this paper presents a systematic analysis of the challenges faced by OFL and thoroughly reviews existing methods. We also propose a novel taxonomy of these methods and analyze the trade-offs among the various techniques. Finally, we discuss the most promising future directions and the technologies that could be integrated into the OFL field. This work aims to provide guidance and insights for future research.
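To make the single-round protocol concrete, the following is a minimal PyTorch sketch of one common OFL recipe: each client trains locally on its private data, uploads its model exactly once, and the server aggregates the uploads by ensembling. This is an illustrative sketch, not the paper's specific method; the function and class names (local_train, OneShotEnsemble) and the toy setup are assumptions introduced here.

```python
# Sketch of one-shot FL: one upload per client, ensemble aggregation at the
# server. Names and setup are illustrative assumptions, not from the paper.
import copy
import torch
import torch.nn as nn

def local_train(model: nn.Module, loader, epochs: int = 1, lr: float = 0.01) -> nn.Module:
    """Standard local SGD on one client's private data (the data never leaves the client)."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

class OneShotEnsemble(nn.Module):
    """Server-side aggregate: average the logits of the uploaded client models."""
    def __init__(self, client_models):
        super().__init__()
        self.models = nn.ModuleList(client_models)

    def forward(self, x):
        return torch.stack([m(x) for m in self.models]).mean(dim=0)

# Single communication round: each client uploads exactly once, and no
# further rounds follow. client_loaders is assumed to hold each client's
# private DataLoader; the model is a toy classifier.
# global_init = nn.Linear(784, 10)
# uploads = [local_train(global_init, dl) for dl in client_loaders]
# server_model = OneShotEnsemble(uploads)
```

Ensembling is only one aggregation strategy; as the survey discusses, OFL methods differ mainly in how the server combines these one-shot uploads, especially under data and model heterogeneity.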
@article{liu2025_2502.09104,
  title={One-shot Federated Learning Methods: A Practical Guide},
  author={Xiang Liu and Zhenheng Tang and Xia Li and Yijun Song and Sijie Ji and Zemin Liu and Bo Han and Linshan Jiang and Jialin Li},
  journal={arXiv preprint arXiv:2502.09104},
  year={2025}
}