Sparse is Enough in Fine-tuning Pre-trained Large Language Models
Weixi Song, Z. Li, Lefei Zhang, Hai Zhao, Bo Du
arXiv:2312.11875 · 19 December 2023 · VLM

Papers citing "Sparse is Enough in Fine-tuning Pre-trained Large Language Models" (8 papers)
1. How Instruction and Reasoning Data shape Post-Training: Data Quality through the Lens of Layer-wise Gradients
   Ming Li, Y. Li, Ziyue Li, Tianyi Zhou · LRM · 14 Apr 2025
2. Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model
   Wenke Huang, Jian Liang, Xianda Guo, Yiyang Fang, Guancheng Wan, ..., Bin Yang, He Li, Jiawei Shao, Mang Ye, Bo Du · OffRL, LRM, MLLM, KELM, VLM · 06 Mar 2025
3. Parameter-Efficient Fine-Tuning of State Space Models
   Kevin Galim, Wonjun Kang, Yuchen Zeng, H. Koo, Kangwook Lee · 11 Oct 2024
4. Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models
   Luohe Shi, Yao Yao, Zuchao Li, Lefei Zhang, Hai Zhao · 30 Sep 2024
5. Sparse Matrix in Large Language Model Fine-tuning
   Haoze He, Juncheng Billy Li, Xuan Jiang, Heather Miller · MoE · 24 May 2024
6. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
   Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · ELM · 20 Apr 2018
7. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016
8. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning
   O. Catoni · 03 Dec 2007