arXiv: 2306.07052
Gradient Ascent Post-training Enhances Language Model Generalization
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
12 June 2023
Dongkeun Yoon, Joel Jang, Sungdong Kim, Minjoon Seo
Papers citing "Gradient Ascent Post-training Enhances Language Model Generalization"
RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models
Bichen Wang, Yuzhe Zi, Yixin Sun, Yanyan Zhao, Bing Qin
04 Jun 2024
Digital Forgetting in Large Language Models: A Survey of Unlearning Methods
Artificial Intelligence Review (Artif Intell Rev), 2024
Alberto Blanco-Justicia, N. Jebreel, Benet Manzanares-Salor, David Sánchez, Josep Domingo-Ferrer, Guillem Collell, Kuan Eeik Tan
02 Apr 2024
Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty
I. Timiryasov, J. Tastet
03 Aug 2023