A Taxonomy of Attacks and Defenses in Split Learning

Split Learning (SL) has emerged as a promising paradigm for distributed deep learning, allowing resource-constrained clients to offload portions of their model computation to servers while still learning collaboratively. However, recent research has demonstrated that SL remains vulnerable to a range of privacy and security threats, including information leakage, model inversion, and adversarial attacks. While various defense mechanisms have been proposed, a systematic understanding of the attack landscape and corresponding countermeasures is still lacking. In this study, we present a comprehensive taxonomy of attacks and defenses in SL, categorizing them along three key dimensions: employed strategies, constraints, and effectiveness. Furthermore, based on our systematization, we identify key open challenges and research gaps in SL and highlight potential future directions.
@article{shabbir2025_2505.05872,
  title   = {A Taxonomy of Attacks and Defenses in Split Learning},
  author  = {Aqsa Shabbir and Halil İbrahim Kanpak and Alptekin Küpçü and Sinem Sav},
  journal = {arXiv preprint arXiv:2505.05872},
  year    = {2025}
}