Clip Body and Tail Separately: High Probability Guarantees for DPSGD with Heavy Tails

27 May 2024
Haichao Sha, Yang Cao, Yong Liu, Yuncheng Wu, Ruixuan Liu, Hong Chen

Papers citing "Clip Body and Tail Separately: High Probability Guarantees for DPSGD with Heavy Tails"

10 / 10 papers shown

  1. DC-SGD: Differentially Private SGD with Dynamic Clipping through Gradient Norm Distribution Estimation
     Chengkun Wei, Weixian Li, Chen Gong, Wenzhi Chen (29 Mar 2025)
  2. From Gradient Clipping to Normalization for Heavy Tailed SGD
     Florian Hübler, Ilyas Fatkhullin, Niao He (17 Oct 2024)
  3. PCDP-SGD: Improving the Convergence of Differentially Private SGD via Projection in Advance
     Haichao Sha, Ruixuan Liu, Yi-xiao Liu, Hong Chen (06 Dec 2023)
  4. High Probability Analysis for Non-Convex Stochastic Optimization with Clipping
     Shaojie Li, Yong Liu (25 Jul 2023)
  5. Revisiting Gradient Clipping: Stochastic Bias and Tight Convergence Guarantees
     Anastasia Koloskova, Hadrien Hendrikx, Sebastian U. Stich (02 May 2023)
  6. Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses
     Andrew Lowy, Meisam Razaviyayn (15 Sep 2022)
  7. Influence-Balanced Loss for Imbalanced Visual Classification
     Seulki Park, Jongin Lim, Younghan Jeon, J. Choi (06 Oct 2021)
  8. Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning
     Da Yu, Huishuai Zhang, Wei Chen, Tie-Yan Liu (25 Feb 2021)
  9. A High Probability Analysis of Adaptive SGD with Momentum
     Xiaoyun Li, Francesco Orabona (28 Jul 2020)
  10. Aggregated Residual Transformations for Deep Neural Networks
     Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He (16 Nov 2016)