
Parallel Split Learning with Global Sampling

Abstract

Distributed deep learning in resource-constrained environments faces scalability and generalization challenges due to large effective batch sizes and non-identically distributed client data. We introduce a server-driven sampling strategy that maintains a fixed global batch size by dynamically adjusting client-side batch sizes. This decouples the effective batch size from the number of participating devices and ensures that global batches better reflect the overall data distribution. Using standard concentration bounds, we establish tighter deviation guarantees compared to existing approaches. Empirical results on a benchmark dataset confirm that the proposed method improves model accuracy, training efficiency, and convergence stability, offering a scalable solution for learning at the network edge.
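To make the idea concrete, the following is a minimal sketch of server-driven global sampling under one plausible reading of the abstract: the server fixes a global batch size and assigns each client a per-round batch size proportional to its local dataset size, so the assembled global batch approximates the overall data distribution while the effective batch size stays independent of the number of participating clients. All names (allocate_client_batch_sizes, sample_round, global_batch_size) and the largest-remainder rounding scheme are illustrative assumptions, not the paper's exact procedure.

import random

def allocate_client_batch_sizes(client_dataset_sizes, global_batch_size):
    """Split a fixed global batch size across clients in proportion to
    their local dataset sizes (largest-remainder rounding), so the
    per-round global batch mirrors the overall data distribution.
    Hypothetical sketch, not the paper's exact allocation rule."""
    total = sum(client_dataset_sizes)
    # Ideal (fractional) share of the global batch for each client.
    shares = [global_batch_size * n / total for n in client_dataset_sizes]
    batch_sizes = [int(s) for s in shares]
    # Hand the leftover samples to the clients with the largest remainders
    # so the allocations sum exactly to the fixed global batch size.
    leftover = global_batch_size - sum(batch_sizes)
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - batch_sizes[i],
                          reverse=True)
    for i in by_remainder[:leftover]:
        batch_sizes[i] += 1
    return batch_sizes

def sample_round(client_data, global_batch_size):
    """One round of server-driven sampling: each client contributes a
    mini-batch of its assigned size; together they form a single global
    batch whose size does not grow with the number of clients."""
    sizes = allocate_client_batch_sizes([len(d) for d in client_data],
                                        global_batch_size)
    return [random.sample(data, b) for data, b in zip(client_data, sizes)]

# Example: 4 clients with unequal local datasets, fixed global batch of 64.
clients = [list(range(n)) for n in (1000, 250, 500, 250)]
print(allocate_client_batch_sizes([len(c) for c in clients], 64))  # -> [32, 8, 16, 8]

In this sketch the per-client batch sizes shrink as more clients join, which is what keeps the global batch size fixed and decoupled from the degree of parallelism described in the abstract.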

@article{kohankhaki2025_2407.15738,
  title={Parallel Split Learning with Global Sampling},
  author={Mohammad Kohankhaki and Ahmad Ayad and Mahdi Barhoush and Anke Schmeink},
  journal={arXiv preprint arXiv:2407.15738},
  year={2025}
}