A Novel Collaborative Framework for Efficient Synchronization in Split Federated Learning over Wireless Networks
Split Federated Learning (SFL) offers a promising approach to distributed model training in wireless networks, combining the layer-partitioning advantages of split learning with federated aggregation to ensure global convergence. However, in heterogeneous wireless environments, disparities in device capabilities and channel conditions make strict round-based synchronization heavily straggler-dominated, limiting both efficiency and scalability. To address this challenge, we propose a new framework, called Collaborative Split Federated Learning (CSFL), that redistributes workload through device-to-device (D2D) collaboration. Building on the flexibility of model partitioning, CSFL enables efficient devices, after completing their own forward propagation, to seamlessly take over the unfinished layers of bottleneck devices. This collaborative process, carried out over D2D links, allows bottleneck devices to offload computation earlier while keeping progress synchronized across the network. Beyond the system design, we highlight key technical enablers such as privacy protection, multi-perspective matching, and incentive mechanisms, and discuss practical challenges including matching balance, privacy risks, and incentive sustainability. A case study demonstrates that CSFL significantly reduces training latency without compromising convergence speed or accuracy, underscoring collaboration as a key enabler of synchronization-efficient learning in next-generation wireless networks.
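To make the collaborative takeover concrete, below is a minimal sketch in PyTorch of the offloading step the abstract describes. All names here (Device, forward_until, forward_from, the block indices) are illustrative assumptions rather than the paper's actual API, and the D2D transfer of the intermediate activation is stood in for by a plain tensor handoff. The sketch assumes both devices hold identical client-side weights, as they would after the previous round's federated aggregation.

```python
# Sketch: a fast device finishes its own forward pass to the cut layer, then
# takes over the unfinished layers of a bottleneck (straggler) device.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Client-side portion of the split model (layers before the cut point).
def make_client_model():
    return nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),   # modules 0-1
        nn.Linear(64, 64), nn.ReLU(),   # modules 2-3
        nn.Linear(64, 16),              # module 4 -> smashed data for the server
    )

class Device:
    def __init__(self, name, model):
        self.name = name
        self.model = model

    def forward_until(self, x, stop_module):
        """Run forward propagation through modules [0, stop_module); return the
        intermediate activation, as if the device stalled at that point."""
        for layer in list(self.model)[:stop_module]:
            x = layer(x)
        return x

    def forward_from(self, h, start_module):
        """Resume forward propagation from start_module on activation h."""
        for layer in list(self.model)[start_module:]:
            h = layer(h)
        return h

# After federated aggregation, every device holds the same client-side weights.
shared = make_client_model()
fast = Device("fast", copy.deepcopy(shared))
slow = Device("slow", copy.deepcopy(shared))

x_fast, x_slow = torch.randn(8, 32), torch.randn(8, 32)

# 1. The fast device completes its own forward pass to the cut layer.
smashed_fast = fast.forward_from(x_fast, 0)

# 2. The bottleneck device stalls after modules 0-3 and hands its intermediate
#    activation to the fast device over the D2D link.
h_slow = slow.forward_until(x_slow, 4)

# 3. The fast device takes over the straggler's unfinished layers, so both
#    smashed activations reach the server within the same training round.
smashed_slow = fast.forward_from(h_slow, 4)

print(smashed_fast.shape, smashed_slow.shape)  # both: torch.Size([8, 16])
```

Under these assumptions, the round's duration is bounded by the fast device's combined workload rather than the straggler's full forward pass, which is the synchronization gain the framework targets.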