Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

Shanglun Feng, Florian Tramèr
SILM · 30 March 2024 · arXiv:2404.00473

Papers citing "Privacy Backdoors: Stealing Data with Corrupted Pretrained Models" (18 papers shown)

Behavior Backdoor for Deep Learning Models
J. T. Wang, Pengfei Zhang, R. Tao, Jian Yang, Hao Liu, X. Liu, Y. X. Wei, Yao Zhao
AAML · 02 Dec 2024

Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models
Cody Clop, Yannick Teglia
AAML, SILM, RALM · 18 Oct 2024

Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage
Md. Rafi Ur Rashid, Jing Liu, T. Koike-Akino, Shagufta Mehnaz, Ye Wang
MU, SILM · 30 Aug 2024

Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, Di Wang
AAML, MIACV, SILM · 05 Aug 2024

Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model
Tudor Cebere, A. Bellet, Nicolas Papernot
23 May 2024

Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
Meenatchi Sundaram Muthu Selva Annamalai, Emiliano De Cristofaro
23 May 2024

Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini
SILM, AAML · 01 Apr 2024

PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
Ruixuan Liu, Tianhao Wang, Yang Cao, Li Xiong
AAML, SILM · 14 Mar 2024

Visual Privacy Auditing with Diffusion Models
Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Alexander Ziller
DiffM, AAML · 12 Mar 2024

FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering
Md. Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz
AAML, FedML · 24 Oct 2023

Polynomial Time Cryptanalytic Extraction of Neural Network Models
Adi Shamir, Isaac Canales-Martínez, Anna Hambitzer, J. Chávez-Saab, Francisco Rodríguez-Henríquez, Nitin Satpute
AAML, MLAU · 12 Oct 2023

Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Victoria Smith, Ali Shahin Shamsabadi, Carolyn Ashurst, Adrian Weller
PILM · 27 Sep 2023

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein
FedML · 01 Feb 2022

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
FedML · 29 Jan 2022

When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML, AAML · 06 Dec 2021

Opacus: User-Friendly Differential Privacy Library in PyTorch
Ashkan Yousefpour, I. Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, ..., Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov
VLM · 25 Sep 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 14 Dec 2020

Cryptanalytic Extraction of Neural Network Models
Nicholas Carlini, Matthew Jagielski, Ilya Mironov
FedML, MLAU, MIACV, AAML · 10 Mar 2020