Differentially Private Zeroth-Order Methods for Scalable Large Language Model Finetuning

12 February 2024
Zhicheng Liu, Jian Lou, W. Bao, Y. Hu, Baochun Li, Z. Qin, K. Ren

Papers citing "Differentially Private Zeroth-Order Methods for Scalable Large Language Model Finetuning"

12 of 12 citing papers shown
Forward Learning with Differential Privacy
  Mingqian Feng, Zeliang Zhang, Jinyang Jiang, Yijie Peng, Chenliang Xu
  01 Apr 2025

DC-SGD: Differentially Private SGD with Dynamic Clipping through Gradient Norm Distribution Estimation
  Chengkun Wei, Weixian Li, Chen Gong, Wenzhi Chen
  29 Mar 2025

Towards Hyperparameter-Free Optimization with Differential Privacy
  Zhiqi Bu, Ruixuan Liu
  02 Mar 2025

REFINE: Inversion-Free Backdoor Defense via Model Reprogramming
  Y. Chen, Shuo Shao, Enhao Huang, Yiming Li, Pin-Yu Chen, Z. Qin, Kui Ren
  22 Feb 2025

Second-Order Fine-Tuning without Pain for LLMs: A Hessian Informed Zeroth-Order Optimizer
  Yanjun Zhao, Sizhe Dang, Haishan Ye, Guang Dai, Yi Qian, Ivor W. Tsang
  23 Feb 2024

Private Fine-tuning of Large Language Models with Zeroth-order Optimization
  Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal
  09 Jan 2024

DPZero: Private Fine-Tuning of Language Models without Backpropagation
  Liang Zhang, Bingcong Li, K. K. Thekumparampil, Sewoong Oh, Niao He
  14 Oct 2023

Differential Privacy Meets Neural Network Pruning
  Kamil Adamczewski, Mijung Park
  08 Mar 2023

Differentially Private Fine-tuning of Language Models
  Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
  13 Oct 2021

Making Pre-trained Language Models Better Few-shot Learners
  Tianyu Gao, Adam Fisch, Danqi Chen
  31 Dec 2020

Extracting Training Data from Large Language Models
  Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
  14 Dec 2020

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
  Hamed Karimi, J. Nutini, Mark W. Schmidt
  16 Aug 2016