Exploring Memorization in Fine-tuned Language Models (arXiv: 2310.06714)
10 October 2023
Shenglai Zeng, Yaxin Li, Jie Ren, Yiding Liu, Han Xu, Pengfei He, Yue Xing, Shuaiqiang Wang, Jiliang Tang, Dawei Yin
PILM
Papers citing "Exploring Memorization in Fine-tuned Language Models" (9 / 9 papers shown)
ReCIT: Reconstructing Full Private Data from Gradient in Parameter-Efficient Fine-Tuning of Large Language Models
Jin Xie, Ruishi He, Songze Li, Xiaojun Jia, Shouling Ji
SILM, AAML · 66 · 0 · 0 · 29 Apr 2025
On the Impact of Fine-Tuning on Chain-of-Thought Reasoning
Elita Lobo, Chirag Agarwal, Himabindu Lakkaraju
LRM · 70 · 5 · 0 · 22 Nov 2024
Undesirable Memorization in Large Language Models: A Survey
Ali Satvaty, Suzan Verberne, Fatih Turkmen
ELM, PILM · 69 · 7 · 0 · 03 Oct 2024
Memorization in NLP Fine-tuning Methods
Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David E. Evans, Taylor Berg-Kirkpatrick
AAML · 61 · 39 · 0 · 25 May 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 303 · 11,881 · 0 · 04 Mar 2022
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM · 236 · 804 · 0 · 14 Oct 2021
Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
SyDa · 237 · 590 · 0 · 14 Jul 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 280 · 3,835 · 0 · 18 Apr 2021
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 267 · 1,808 · 0 · 14 Dec 2020