arXiv: 2310.00867
Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications
2 October 2023
Duc N. M. Hoang, Minsik Cho, Thomas Merth, Mohammad Rastegari, Zhangyang Wang
KELM, CLL
Papers citing
"Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications"
4 papers
Polysemy of Synthetic Neurons Towards a New Type of Explanatory Categorical Vector Spaces
Michael Pichat, William Pogrund, Paloma Pichat, Judicael Poumay, Armanouche Gasparian, Samuel Demarchi, Martin Corbet, Alois Georgeon, Michael Veillet-Guillem
MILM
30 Apr 2025
Composable Interventions for Language Models
Arinbjorn Kolbeinsson, Kyle O'Brien, Tianjin Huang, Shanghua Gao, Shiwei Liu, ..., Anurag J. Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen
KELM, MU
09 Jul 2024
Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning
Xiangyu Peng, Chen Xing, Prafulla Kumar Choubey, Chien-Sheng Wu, Caiming Xiong
VLM
23 Oct 2022
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021