Poisoning and Backdooring Contrastive Learning
Nicholas Carlini, Andreas Terzis
17 June 2021 · arXiv:2106.09667
Papers citing "Poisoning and Backdooring Contrastive Learning" (50 of 118 papers shown):
- Trustworthy Large Models in Vision: A Survey. Ziyan Guo, Li Xu, Jun Liu. [MU] 16 Nov 2023.
- Defending Our Privacy With Backdoors. Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting. [SILM, AAML] 12 Oct 2023.
- Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks. Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman. [VLM] 05 Oct 2023.
- Towards Stable Backdoor Purification through Feature Shift Tuning. Rui Min, Zeyu Qin, Li Shen, Minhao Cheng. [AAML] 03 Oct 2023.
- GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to Pre-trained Encoders in Self-supervised Learning. Qiannan Wang, Changchun Yin, Zhe Liu, Liming Fang, Run Wang, Chenhao Lin. [AAML] 01 Oct 2023.
- Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey. Mahdi Alkaeed, Adnan Qayyum, Junaid Qadir. 19 Sep 2023.
- BAGEL: Backdoor Attacks against Federated Contrastive Learning. Yao Huang, Kongyang Chen, Jiannong Cao, Jiaxing Shen, Shaowei Wang, Yun Peng, Weilong Peng, Kechao Cai. [FedML] 14 Sep 2023.
- Identifying and Mitigating the Security Risks of Generative AI. Clark W. Barrett, Bradley L Boyd, Ellie Burzstein, Nicholas Carlini, Brad Chen, ..., Zulfikar Ramzan, Khawaja Shams, D. Song, Ankur Taly, Diyi Yang. [SILM] 28 Aug 2023.
- Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models. Dominik Hintersdorf, Lukas Struppek, Kristian Kersting. [SILM] 18 Aug 2023.
- Test-Time Poisoning Attacks Against Test-Time Adaptation Models. Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang. [AAML, TTA] 16 Aug 2023.
- SSL-Auth: An Authentication Framework by Fragile Watermarking for Pre-trained Encoders in Self-supervised Learning. Xiaobei Li, Changchun Yin, Liyue Zhu, Xiaogang Xu, Liming Fang, Run Wang, Chenhao Lin. [AAML] 09 Aug 2023.
- Downstream-agnostic Adversarial Examples. Ziqi Zhou, Shengshan Hu, Rui-Qing Zhao, Qian Wang, L. Zhang, Junhui Hou, Hai Jin. [SILM, AAML] 23 Jul 2023.
- NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models. Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, Shiqing Ma. [AAML, SILM] 28 May 2023.
- The Curse of Recursion: Training on Generated Data Makes Models Forget. Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Y. Gal, Nicolas Papernot, Ross J. Anderson. [DiffM] 27 May 2023.
- Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation. Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn. [AAML] 19 May 2023.
- Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks. Nils Lukas, Florian Kerschbaum. 07 May 2023.
- Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks. Jingfeng Zhang, Bo Song, Bo Han, Lei Liu, Gang Niu, Masashi Sugiyama. [AAML] 30 Apr 2023.
- RoCOCO: Robustness Benchmark of MS-COCO to Stress-test Image-Text Matching Models. Seulki Park, Daeho Um, Hajung Yoon, Sanghyuk Chun, Sangdoo Yun, Jin Young Choi. 21 Apr 2023.
- Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning. Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, L Tan. [AAML] 04 Apr 2023.
- Detecting Backdoors in Pre-trained Encoders. Shiwei Feng, Guanhong Tao, Shuyang Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang. 23 Mar 2023.
- Black-box Backdoor Defense via Zero-shot Image Purification. Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan, Jin Sun, Ninghao Liu. 21 Mar 2023.
- Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks. Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman. [VLM] 13 Mar 2023.
- Backdoor Defense via Deconfounded Representation Learning. Zaixin Zhang, Qi Liu, Zhicai Wang, Zepu Lu, Qingyong Hu. [AAML] 13 Mar 2023.
- CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning. Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang. [AAML] 06 Mar 2023.
- Single Image Backdoor Inversion via Robust Smoothed Classifiers. Mingjie Sun, Zico Kolter. [AAML] 01 Mar 2023.
- ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms. Minzhou Pan, Yi Zeng, Lingjuan Lyu, X. Lin, R. Jia. [AAML] 22 Feb 2023.
- Poisoning Web-Scale Training Datasets is Practical. Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr. [SILM] 20 Feb 2023.
- Temporal Robustness against Data Poisoning. Wenxiao Wang, S. Feizi. [AAML, OOD] 07 Feb 2023.
- Uncovering Adversarial Risks of Test-Time Adaptation. Tong Wu, Feiran Jia, Xiangyu Qi, Jiachen T. Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal. [AAML, TTA] 29 Jan 2023.
- A Study on FGSM Adversarial Training for Neural Retrieval. Simon Lupart, S. Clinchant. [AAML] 25 Jan 2023.
- REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong. [SILM, AAML] 07 Jan 2023.
- Backdoor Attacks Against Dataset Distillation. Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang. [DD] 03 Jan 2023.
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong. [SSL] 06 Dec 2022.
- CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning. Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong. [AAML] 15 Nov 2022.
- ContextCLIP: Contextual Alignment of Image-Text pairs on CLIP visual representations. Chanda Grover, Indra Deep Mastan, Debayan Gupta. [VLM, CLIP] 14 Nov 2022.
- Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis. Lukas Struppek, Dominik Hintersdorf, Kristian Kersting. [SILM] 04 Nov 2022.
- Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning. Xiaoyi Chen, Baisong Xin, Shengfang Zhai, Shiqing Ma, Qingni Shen, Zhonghai Wu. [SILM] 20 Oct 2022.
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning. Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, S. Ji, Yuan Yao, Ting Wang. [AAML] 13 Oct 2022.
- How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, R. Jia. [AAML] 12 Oct 2022.
- Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, Zhangyang Wang. [AAML] 12 Oct 2022.
- Backdoor Attacks in the Supply Chain of Masked Image Modeling. Xinyue Shen, Xinlei He, Zheng Li, Yun Shen, Michael Backes, Yang Zhang. 04 Oct 2022.
- Data Poisoning Attacks Against Multimodal Encoders. Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang. [AAML] 30 Sep 2022.
- Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection. Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li. [AAML] 27 Sep 2022.
- Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis. Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, P. Schramowski, Kristian Kersting. 19 Sep 2022.
- Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain. Chang Yue, Peizhuo Lv, Ruigang Liang, Kai Chen. [AAML] 09 Jul 2022.
- CyCLIP: Cyclic Contrastive Language-Image Pretraining. Shashank Goel, Hritik Bansal, S. Bhatia, Ryan A. Rossi, Vishwa Vinay, Aditya Grover. [CLIP, VLM] 28 May 2022.
- BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning. Zhenting Wang, Juan Zhai, Shiqing Ma. [AAML] 26 May 2022.
- On Trace of PGD-Like Adversarial Attacks. Mo Zhou, Vishal M. Patel. [AAML] 19 May 2022.
- PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong. 13 May 2022.
- Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning. Hao He, Kaiwen Zha, Dina Katabi. [AAML] 22 Feb 2022.