arXiv: 2301.13104
Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models
30 January 2023
Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis
Papers citing "Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models" (5 / 5 papers shown)
1. "Differentially Private Active Learning: Balancing Effective Data Selection and Privacy" (01 Oct 2024)
   Kristian Schwethelm, Johannes Kaiser, Jonas Kuntzer, Mehmet Yigitsoy, Daniel Rueckert, Georgios Kaissis
   27 / 0 / 0

2. "Not all noise is accounted equally: How differentially private learning benefits from large sampling rates" (12 Oct 2021)
   Friedrich Dörmann, Osvald Frisk, L. Andersen, Christian Fischer Pedersen
   Tags: FedML
   44 / 25 / 0

3. "Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning" (25 Feb 2021)
   Da Yu, Huishuai Zhang, Wei Chen, Tie-Yan Liu
   Tags: FedML, SILM
   91 / 110 / 0

4. "Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks" (31 Jan 2021)
   Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
   Tags: MQ
   136 / 679 / 0

5. "Extracting Training Data from Large Language Models" (14 Dec 2020)
   Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
   Tags: MLAU, SILM
   267 / 1,798 / 0