BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
1 August 2021 · arXiv: 2108.00352 · Communities: SILM, SSL

Papers citing "BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning" (32 of 32 shown):

X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
Hanxun Huang, Sarah Monazam Erfani, Yige Li, Xingjun Ma, James Bailey · AAML · 08 May 2025

Protocol-agnostic and Data-free Backdoor Attacks on Pre-trained Models in RF Fingerprinting
Tianya Zhao, Ningning Wang, Junqing Zhang, Xuyu Wang · AAML · 01 May 2025

DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders
Sizai Hou, Songze Li, Duanyi Yao · AAML · 25 Nov 2024

Backdooring Vision-Language Models with Out-Of-Distribution Data
Weimin Lyu, Jiachen Yao, Saumya Gupta, Lu Pang, Tao Sun, Lingjie Yi, Lijie Hu, Haibin Ling, Chao Chen · VLM, AAML · 02 Oct 2024

Membership Inference Attack Against Masked Image Modeling
Z. Li, Xinlei He, Ning Yu, Yang Zhang · 13 Aug 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong · AAML · 22 Feb 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang · AAML · 14 Dec 2023

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan · AAML, SILM · 20 Nov 2023

Adversarial Illusions in Multi-Modal Embeddings
Tingwei Zhang, Rishi Jha, Eugene Bagdasaryan, Vitaly Shmatikov · AAML · 22 Aug 2023

DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
Zhicong Yan, Shenghong Li, Ruijie Zhao, Yuan Tian, Yuanyuan Zhao · AAML · 13 Jun 2023

NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, Shiqing Ma · AAML, SILM · 28 May 2023

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning
Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shih-Chieh Pu, Yuejian Fang, Hang Su · 07 May 2023

Defense-Prefix for Preventing Typographic Attacks on CLIP
Hiroki Azuma, Yusuke Matsui · VLM, AAML · 10 Apr 2023

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang · AAML · 06 Mar 2023

Prompt Stealing Attacks Against Text-to-Image Generation Models
Xinyue Shen, Y. Qu, Michael Backes, Yang Zhang · 20 Feb 2023

Backdoor Attacks to Pre-trained Unified Foundation Models
Zenghui Yuan, Yixin Liu, Kai Zhang, Pan Zhou, Lichao Sun · AAML · 18 Feb 2023

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service
Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong · SILM, AAML · 07 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang · DD · 03 Jan 2023

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang · AAML · 18 Dec 2022

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong · AAML · 15 Nov 2022

Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy
Wenqiang Ruan, Ming Xu, Wenjing Fang, Li Wang, Lei Wang, Wei Han · 18 Aug 2022

AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
Tianxing Zhang, Hanzhou Wu, Xiaofeng Lu, Guangling Sun · AAML · 08 Aug 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong · 13 May 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger · AAML · 20 Apr 2022

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi · AAML · 22 Feb 2022

Backdoor Defense via Decoupling the Training Process
Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren · AAML, FedML · 05 Feb 2022

Watermarking Pre-trained Encoders in Contrastive Learning
Yutong Wu, Han Qiu, Tianwei Zhang, Jiwei Li, M. Qiu · 20 Jan 2022

Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures
Eugene Bagdasaryan, Vitaly Shmatikov · SILM, AAML · 09 Dec 2021

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano · SILM, AAML · 23 Sep 2021

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang · AAML · 07 Mar 2020

Model-Reuse Attacks on Deep Learning Systems
Yujie Ji, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang · SILM, AAML · 02 Dec 2018

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino · AAML · 02 Dec 2018