Contrastive Learning with Nasty Noise

Abstract
Contrastive learning has emerged as a powerful paradigm for self-supervised representation learning. This work analyzes the theoretical limits of contrastive learning under nasty noise, in which an adversary may inspect the training set and modify or replace a fraction of its samples. Through PAC-learning and VC-dimension analysis, lower and upper bounds on the sample complexity of learning in this adversarial setting are established. In addition, data-dependent sample complexity bounds based on the ℓ2-distance function are derived.
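
For context, the bounds mentioned above are of the same general form as the classical noise-free PAC sample complexity bound, which is governed by the VC dimension of the hypothesis class. The LaTeX sketch below is standard textbook background, not a result taken from this paper; the paper's contribution is to establish analogous upper and lower bounds for contrastive learning when the training sample is adversarially corrupted.

  % Classical agnostic PAC sample complexity for a hypothesis class of
  % VC dimension d (standard background, not a result from this paper):
  % m(epsilon, delta) labeled examples suffice to reach excess error
  % epsilon with probability at least 1 - delta, and this order is tight.
  \[
    m(\varepsilon,\delta)
    \;=\;
    \Theta\!\left(\frac{d + \log(1/\delta)}{\varepsilon^{2}}\right)
  \]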
@article{zhao2025_2502.17872,
  title   = {Contrastive Learning with Nasty Noise},
  author  = {Ziruo Zhao},
  journal = {arXiv preprint arXiv:2502.17872},
  year    = {2025}
}