Students Parrot Their Teachers: Membership Inference on Model Distillation

Neural Information Processing Systems (NeurIPS), 2023
6 March 2023
Matthew Jagielski
Milad Nasr
Christopher A. Choquette-Choo
Katherine Lee
Nicholas Carlini
FedML
ArXiv (abs) · PDF · HTML

Papers citing "Students Parrot Their Teachers: Membership Inference on Model Distillation"

20 / 20 papers shown
Imitative Membership Inference Attack
Yuntao Du
Yuetian Chen
Hanshen Xiao
Bruno Ribeiro
Ninghui Li
08 Sep 2025
Synthetic Adaptive Guided Embeddings (SAGE): A Novel Knowledge Distillation Method
Suleyman O. Polat
Poli A. Nemkova
Mark V. Albert
20 Aug 2025
Membership and Memorization in LLM Knowledge Distillation
Ziqi Zhang
Ali Shahin Shamsabadi
Hanxiao Lu
Yifeng Cai
Hamed Haddadi
09 Aug 2025
Cascading and Proxy Membership Inference Attacks
Yuntao Du
Jiacheng Li
Yuetian Chen
Kaiyuan Zhang
Zhizhen Yuan
Hanshen Xiao
Bruno Ribeiro
Ninghui Li
29 Jul 2025
Multidimensional Analysis of Specific Language Impairment Using Unsupervised Learning Through PCA and Clustering
IEEE International Conference on Healthcare Informatics (ICHI), 2025
Niruthiha Selvanayagam
05 Jun 2025
Language Models May Verbatim Complete Text They Were Not Explicitly Trained On
Katja Filippova
Christopher A. Choquette-Choo
Matthew Jagielski
Peter Kairouz
Sanmi Koyejo
Abigail Z. Jacobs
Nicolas Papernot
21 Mar 2025
Privacy Auditing of Large Language Models
International Conference on Learning Representations (ICLR), 2025
Ashwinee Panda
Xinyu Tang
Milad Nasr
Christopher A. Choquette-Choo
Prateek Mittal
PILM
09 Mar 2025
Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Jaydeep Borkar
Matthew Jagielski
Katherine Lee
Niloofar Mireshghallah
David A. Smith
Christopher A. Choquette-Choo
PILM
21 Feb 2025
Memorization Inheritance in Sequence-Level Knowledge Distillation for Neural Machine Translation
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Verna Dankers
Vikas Raunak
VLM
03 Feb 2025
Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models
Martin Pawelczyk
Lillian Sun
Zhenting Qi
Aounon Kumar
Himabindu Lakkaraju
03 Jan 2025
Dataset Size Recovery from LoRA Weights
Mohammad Salama
Jonathan Kahana
Eliahu Horwitz
Yedid Hoshen
27 Jun 2024
DPDR: Gradient Decomposition and Reconstruction for Differentially Private Deep Learning
Yixuan Liu
Li Xiong
Yuhan Liu
Yujie Gu
Ruixuan Liu
Hong Chen
04 Jun 2024
GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation
Andrey V. Galichin
Mikhail Aleksandrovich Pautov
Alexey Zhavoronkin
Oleg Y. Rogov
Ivan Oseledets
AAML
13 May 2024
Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda
Christopher A. Choquette-Choo
Zhengming Zhang
Yaoqing Yang
Prateek Mittal
PILM
01 Mar 2024
Auditing Private Prediction
Karan Chadha
Matthew Jagielski
Nicolas Papernot
Christopher A. Choquette-Choo
Milad Nasr
14 Feb 2024
Learning-Based Difficulty Calibration for Enhanced Membership Inference Attacks
European Symposium on Security and Privacy (EuroS&P), 2024
Haonan Shi
Ouyang Tu
An Wang
10 Jan 2024
User Inference Attacks on Large Language Models
Nikhil Kandpal
Krishna Pillutla
Alina Oprea
Peter Kairouz
Christopher A. Choquette-Choo
Zheng Xu
SILM, AAML
13 Oct 2023
Membership Inference Attacks on DNNs using Adversarial Perturbations
Hassan Ali
Adnan Qayyum
Ala I. Al-Fuqaha
Junaid Qadir
AAML
11 Jul 2023
TMI! Finetuned Models Leak Private Information from their Pretraining Data
Proceedings on Privacy Enhancing Technologies (PoPETs), 2023
John Abascal
Stanley Wu
Alina Oprea
Jonathan R. Ullman
01 Jun 2023
Red Teaming Language Model Detectors with Language Models
Transactions of the Association for Computational Linguistics (TACL), 2023
Zhouxing Shi
Yihan Wang
Fan Yin
Xiangning Chen
Kai-Wei Chang
Cho-Jui Hsieh
DeLMO
31 May 2023