HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy
26 January 2024
Yongkang Liu, Yiqun Zhang, Qian Li, Tong Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Schütze
arXiv: 2401.15207

Papers citing "HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy" (7 papers)

BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models
Qi Luo, Hengxu Yu, Xiao Li
03 Apr 2024

GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network
Shuzhou Yuan, Ercong Nie, Michael Färber, Helmut Schmid, Hinrich Schütze
18 Feb 2024

YUAN 2.0: A Large Language Model with Localized Filtering-based Attention
Shaohua Wu, Xudong Zhao, Shenling Wang, Jiangang Luo, Lingjun Li, ..., Wei Wang, Tong Yu, Rongguo Zhang, Jiahua Zhang, Chao Wang
27 Nov 2023

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018