ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2402.15751 · Cited By
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning

24 February 2024
Authors: Yong Liu, Zirui Zhu, Chaoyu Gong, Minhao Cheng, Cho-Jui Hsieh, Yang You
Topic: MoE
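For context, the paper builds on MeZO-style zeroth-order optimization, which estimates a gradient from two forward passes under a shared random perturbation; the "sparse" variant restricts that perturbation to a subset of parameters. A minimal sketch, assuming a standard SPSA-style central-difference estimator with a fixed sparsity mask; the function name `zo_sparse_step` and the toy quadratic loss are ours, not from the paper:

```python
import random

def zo_sparse_step(theta, loss_fn, mask, eps=1e-3, lr=1e-2, seed=0):
    """One sparse zeroth-order (SPSA-style) update.

    theta: list of parameters; loss_fn: scalar loss on theta;
    mask: 1 where a parameter may be perturbed/updated, else 0.
    """
    rng = random.Random(seed)
    # Sample a Gaussian perturbation, zeroed outside the sparse mask.
    z = [rng.gauss(0.0, 1.0) * m for m in mask]
    plus = [t + eps * zi for t, zi in zip(theta, z)]
    minus = [t - eps * zi for t, zi in zip(theta, z)]
    # Two forward passes give a scalar directional-derivative estimate.
    g = (loss_fn(plus) - loss_fn(minus)) / (2.0 * eps)
    # Move only along the masked perturbation direction.
    return [t - lr * g * zi for t, zi in zip(theta, z)]

# Toy quadratic loss with minimum at [1, 2, 3].
target = [1.0, 2.0, 3.0]
loss = lambda th: sum((t - s) ** 2 for t, s in zip(th, target))

theta = [0.0, 0.0, 0.0]
mask = [1, 1, 0]  # third parameter is frozen by the sparsity mask
for step in range(2000):
    theta = zo_sparse_step(theta, loss, mask, seed=step)
print(theta)  # first two coordinates approach 1.0 and 2.0; the masked third stays 0.0
```

Note that memory stays at inference level: only the random seed and the scalar loss difference are needed per step, never a backward pass, which is the appeal of this family of methods for LLM fine-tuning.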

Papers citing "Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning"

13 citing papers shown.
1. Stochastic Subspace Descent Accelerated via Bi-fidelity Line Search (30 Apr 2025)
   Nuojin Cheng, Alireza Doostan, Stephen Becker
2. Perturbation-efficient Zeroth-order Optimization for Hardware-friendly On-device Training (28 Apr 2025)
   Qitao Tan, Sung-En Chang, Rui Xia, Huidong Ji, Chence Yang, ..., Zheng Zhan, Zhou Zou, Y. Wang, Jin Lu, Geng Yuan
3. SubZero: Composing Subject, Style, and Action via Zero-Shot Personalization (27 Feb 2025) [DiffM]
   Shubhankar Borse, K. Bhardwaj, Mohammad Reza Karimi Dastjerdi, Hyojin Park, Shreya Kadambi, ..., Prathamesh Mandke, Ankita Nayak, Harris Teague, Munawar Hayat, Fatih Porikli
4. QuZO: Quantized Zeroth-Order Fine-Tuning for Large Language Models (17 Feb 2025) [MQ]
   Jiajun Zhou, Yifan Yang, Kai Zhen, Z. Liu, Yequan Zhao, Ershad Banijamali, Athanasios Mouchtaris, Ngai Wong, Zheng Zhang
5. MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models (17 Feb 2025)
   Zhen Zhang, Y. Yang, Kai Zhen, Nathan Susanj, Athanasios Mouchtaris, Siegfried Kunzmann, Zheng Zhang
6. Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks (17 Feb 2025)
   Yequan Zhao, Xinling Yu, Xian Xiao, Z. Chen, Z. Liu, G. Kurczveil, R. Beausoleil, S. Liu, Z. Zhang
7. ElasticZO: A Memory-Efficient On-Device Learning with Combined Zeroth- and First-Order Optimization (08 Jan 2025) [MQ]
   Keisuke Sugiura, Hiroki Matsutani
8. Simultaneous Computation and Memory Efficient Zeroth-Order Optimizer for Fine-Tuning Large Language Models (13 Oct 2024)
   Fei Wang, Li Shen, Liang Ding, Chao Xue, Ye Liu, Changxing Ding
9. Zeroth-Order Fine-Tuning of LLMs in Random Subspaces (11 Oct 2024)
   Ziming Yu, Pan Zhou, Sike Wang, Jia Li, Hua Huang
10. AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning (26 Jun 2024)
    Yifan Yang, Kai Zhen, Ershad Banijamal, Athanasios Mouchtaris, Zheng Zhang
11. Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity (05 Jun 2024)
    Wentao Guo, Jikai Long, Yimeng Zeng, Zirui Liu, Xinyu Yang, ..., Osbert Bastani, Christopher De Sa, Xiaodong Yu, Beidi Chen, Zhaozhuo Xu
12. BBTv2: Towards a Gradient-Free Future with Large Language Models (23 May 2022)
    Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu
13. The Power of Scale for Parameter-Efficient Prompt Tuning (18 Apr 2021) [VPVLM]
    Brian Lester, Rami Al-Rfou, Noah Constant