Towards Sharper Utility Bounds for Differentially Private Pairwise Learning

7 May 2021 · arXiv:2105.03033

Yilin Kang, Yong Liu, Jian Li, Weiping Wang
Abstract

Pairwise learning focuses on learning tasks with pairwise loss functions, which depend on pairs of training instances and are naturally suited to modeling relationships between pairs of samples. In this paper, we focus on the privacy of pairwise learning and propose a new differential privacy paradigm for pairwise learning based on gradient perturbation. Beyond the privacy guarantees, we also analyze the excess population risk and give corresponding bounds, both in expectation and with high probability. We use the \textit{on-average stability} and \textit{pairwise locally elastic stability} theories to analyze the expectation bound and the high-probability bound, respectively. Moreover, our utility bounds do not require convex pairwise loss functions, so our analysis covers both convex and non-convex settings. Even so, the resulting utility bounds are comparable to (or better than) previous bounds derived under convexity or strong convexity assumptions, which is an attractive result.
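Since the abstract describes the mechanism only at a high level, the following is a minimal sketch of what gradient perturbation for pairwise learning typically looks like: run gradient descent on the empirical pairwise risk and add Gaussian noise to each gradient step. The hinge-style pairwise loss, the hyperparameters, and the function names below are illustrative assumptions, not the paper's algorithm; in particular, calibrating the noise scale sigma to a target (eps, delta) privacy budget is exactly the analysis the paper carries out and is not reproduced here.

```python
import numpy as np

def pairwise_hinge_grad(w, xi, xj, yij):
    """Gradient of a hinge-style pairwise loss on one pair (illustrative).

    yij is +1 if the pair (xi, xj) is similar, -1 otherwise.
    loss = max(0, 1 - yij * <w, xi - xj>)
    """
    if yij * np.dot(w, xi - xj) < 1.0:
        return -yij * (xi - xj)
    return np.zeros_like(w)

def dp_pairwise_gd(X, Y, sigma, eta=0.1, T=100, seed=0):
    """Gradient perturbation for pairwise learning (sketch).

    X: (n, d) array of instances; Y: (n, n) array of pair labels in {-1, +1}.
    sigma: Gaussian noise scale; must be calibrated to the gradient
    sensitivity and the desired (eps, delta) privacy budget.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    num_pairs = n * (n - 1) / 2
    for _ in range(T):
        grad = np.zeros(d)
        # The empirical pairwise risk averages the loss over all pairs.
        for i in range(n):
            for j in range(i + 1, n):
                grad += pairwise_hinge_grad(w, X[i], X[j], Y[i, j])
        grad /= num_pairs
        # Perturb the gradient with isotropic Gaussian noise before stepping.
        w -= eta * (grad + rng.normal(0.0, sigma, size=d))
    return w
```

Note that the empirical risk here is an average over all O(n^2) pairs of training samples rather than over individual samples, which is what distinguishes pairwise learning (and its stability-based utility analysis) from standard pointwise empirical risk minimization.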
