Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models
arXiv: 2404.08080 · 11 April 2024
Tanmay Gautam, Youngsuk Park, Hao Zhou, Parameswaran Raman, Wooseok Ha

Papers citing "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models" (5 of 5 papers shown)

MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models
Zhen Zhang, Y. Yang, Kai Zhen, Nathan Susanj, Athanasios Mouchtaris, Siegfried Kunzmann, Zheng Zhang
17 Feb 2025

Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks
Yequan Zhao, Xinling Yu, Xian Xiao, Z. Chen, Z. Liu, G. Kurczveil, R. Beausoleil, S. Liu, Z. Zhang
17 Feb 2025

BBTv2: Towards a Gradient-Free Future with Large Language Models
Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu
23 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM
04 Mar 2022

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018