Evading the Curse of Dimensionality in Unconstrained Private GLMs via Private Gradient Descent
arXiv:2006.06783 · 11 June 2020
Shuang Song, Thomas Steinke, Om Thakkar, Abhradeep Thakurta

Papers citing "Evading the Curse of Dimensionality in Unconstrained Private GLMs via Private Gradient Descent" (8 papers):

DC-SGD: Differentially Private SGD with Dynamic Clipping through Gradient Norm Distribution Estimation
Chengkun Wei, Weixian Li, Chen Gong, Wenzhi Chen
29 Mar 2025

PILLAR: How to make semi-private learning more effective
Francesco Pinto, Yaxian Hu, Fanny Yang, Amartya Sanyal
06 Jun 2023

Near Optimal Private and Robust Linear Regression
Xiyang Liu, Prateek Jain, Weihao Kong, Sewoong Oh, A. Suggala
30 Jan 2023

Differentially Private Image Classification from Features
Harsh Mehta, Walid Krichene, Abhradeep Thakurta, Alexey Kurakin, Ashok Cutkosky
24 Nov 2022

When Does Differentially Private Learning Not Suffer in High Dimensions?
Xuechen Li, Daogao Liu, Tatsunori Hashimoto, Huseyin A. Inan, Janardhan Kulkarni, Y. Lee, Abhradeep Thakurta
01 Jul 2022

Toward Training at ImageNet Scale with Differential Privacy
Alexey Kurakin, Shuang Song, Steve Chien, Roxana Geambasu, Andreas Terzis, Abhradeep Thakurta
28 Jan 2022

Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Jinfeng Yi
25 Jun 2021 (FedML)

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
08 Dec 2012