Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks

31 October 2023
Jiayuan Ye
Zhenyu Zhu
Fanghui Liu
Reza Shokri
V. Cevher

Papers citing "Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks"

7 / 7 papers shown
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
Sitan Chen, Sinho Chewi, Jungshian Li, Yuanzhi Li, Adil Salim, Anru R. Zhang
DiffM · 22 Sep 2022

Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher
15 Sep 2022

Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference
Jasper Tan, Blake Mason, Hamid Javadi, Richard G. Baraniuk
FedML · 02 Feb 2022

When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint
Y. Freund, Yi-An Ma, Tong Zhang
05 Oct 2021

Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
Raef Bassily, Cristóbal Guzmán, Michael Menart
12 Jul 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU · SILM · 14 Dec 2020

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
08 Dec 2012