AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning

26 June 2024
Yifan Yang, Kai Zhen, Ershad Banijamali, Athanasios Mouchtaris, Zheng Zhang

Papers citing "AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning"

4 papers shown

MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models
Zhen Zhang, Y. Yang, Kai Zhen, Nathan Susanj, Athanasios Mouchtaris, Siegfried Kunzmann, Zheng Zhang
17 Feb 2025

Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks
Yequan Zhao, Xinling Yu, Xian Xiao, Z. Chen, Z. Liu, G. Kurczveil, R. Beausoleil, S. Liu, Z. Zhang
17 Feb 2025

Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning
Yong Liu, Zirui Zhu, Chaoyu Gong, Minhao Cheng, Cho-Jui Hsieh, Yang You
24 Feb 2024

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018