Randomized Pairwise Learning with Adaptive Sampling: A PAC-Bayes Analysis

Abstract

We study stochastic optimization with data-adaptive sampling schemes for training pairwise learning models. Pairwise learning is ubiquitous: it covers several popular learning tasks, such as ranking, metric learning, and AUC maximization. A notable difference between pairwise and pointwise learning is the statistical dependence among input pairs, which existing analyses have not been able to handle in the general setting considered in this paper. To this end, we extend recent results that blend two algorithm-dependent frameworks of analysis -- algorithmic stability and PAC-Bayes -- allowing us to deal with any data-adaptive sampling scheme in the optimizer. We instantiate this framework to analyze (1) pairwise stochastic gradient descent, a default workhorse in many machine learning problems, and (2) pairwise stochastic gradient descent ascent, a method used in adversarial training. Both algorithms sample indices from a discrete distribution before each update. Non-uniform sampling of these indices has already been suggested in the recent literature; our work provides generalization guarantees for it in both smooth and non-smooth convex problems.
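To make the setup concrete, here is a minimal sketch of pairwise SGD with non-uniform index sampling, the kind of randomized update the abstract describes. The loss (a squared pairwise regression loss for a linear scorer), the step-size schedule, and the function name `pairwise_sgd` are illustrative assumptions, not the authors' implementation; the point is only that each update draws a pair of indices from a user-supplied discrete distribution `probs`.

```python
import numpy as np

def pairwise_sgd(X, y, probs, steps=1000, lr=0.1, seed=None):
    """Illustrative pairwise SGD (assumed loss, not the paper's exact setup).

    Minimizes the average pairwise squared loss
        (w . (x_i - x_j) - (y_i - y_j))^2
    over index pairs (i, j) drawn non-uniformly from `probs`,
    the data-adaptive sampling distribution over sample indices.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(steps):
        # Data-adaptive step: sample a pair of distinct indices
        # from the (possibly non-uniform) distribution `probs`.
        i, j = rng.choice(n, size=2, replace=False, p=probs)
        diff = X[i] - X[j]
        resid = w @ diff - (y[i] - y[j])
        grad = 2.0 * resid * diff
        w -= lr / np.sqrt(t + 1) * grad  # decaying step size
    return w
```

With uniform `probs` this reduces to standard pairwise SGD; the analysis in the paper is stated to cover arbitrary (including adaptively chosen) distributions over the indices.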

@article{zhou2025_2504.02957,
  title={Randomized Pairwise Learning with Adaptive Sampling: A PAC-Bayes Analysis},
  author={Sijia Zhou and Yunwen Lei and Ata Kabán},
  journal={arXiv preprint arXiv:2504.02957},
  year={2025}
}