arXiv 2106.10807 · Cited By

Adversarial Examples Make Strong Poisons
21 June 2021
Liam H. Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
SILM
Papers citing "Adversarial Examples Make Strong Poisons" (50 of 96 shown)

MTL-UE: Learning to Learn Nothing for Multi-Task Learning
Yi Yu, Song Xia, Siyuan Yang, Chenqi Kong, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex Chichung Kot
08 May 2025

The Ultimate Cookbook for Invisible Poison: Crafting Subtle Clean-Label Text Backdoors with Style Attributes
Wencong You, Daniel Lowd
24 Apr 2025

BridgePure: Limited Protection Leakage Can Break Black-Box Data Protection
Yihan Wang, Yiwei Lu, Xiao-Shan Gao, Gautam Kamath, Yaoliang Yu
30 Dec 2024

Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
Yuhao He, Jinyu Tian, Xianwei Zheng, Li Dong, Yuanman Li, L. Zhang
AAML · 06 Nov 2024

Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training
Junhao Dong, Xinghua Qu, Zhiyuan Wang, Yew-Soon Ong
AAML · 05 Nov 2024

Learning from Convolution-based Unlearnable Datasets
Dohyun Kim, Pedro Sandoval-Segura
MU · 04 Nov 2024

UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation
Ye Sun, Hao Zhang, Tiehua Zhang, Xingjun Ma, Yu-Gang Jiang
VLM · 13 Oct 2024

S⁴ST: A Strong, Self-transferable, faSt, and Simple Scale Transformation for Transferable Targeted Attack
Yongxiang Liu, Bowen Peng, Li Liu, X. Li
13 Oct 2024

Poison-splat: Computation Cost Attack on 3D Gaussian Splatting
Jiahao Lu, Yifan Zhang, Qiuhong Shen, Xinchao Wang, Shuicheng Yan
3DGS · 10 Oct 2024

On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning
Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu
TTA, AAML · 07 Oct 2024

Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need
Xianlong Wang, Minghui Li, Wei Liu, Hangtao Zhang, Shengshan Hu, Yechao Zhang, Ziqi Zhou, Hai Jin
3DPC, MU · 04 Oct 2024

Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
Yixin Liu, Arielle Carr, Lichao Sun
AAML · 01 Oct 2024

Exploiting Supervised Poison Vulnerability to Strengthen Self-Supervised Defense
Jeremy A. Styborski, Mingzhi Lyu, Y. Huang, Adams Kong
13 Sep 2024

Transfer-based Adversarial Poisoning Attacks for Online (MIMO-)Deep Receivers
Kunze Wu, Weiheng Jiang, Dusit Niyato, Yinghuan Li, Chuang Luo
AAML · 04 Sep 2024

Unlearnable Examples Detection via Iterative Filtering
Yi Yu, Qichen Zheng, Siyuan Yang, Wenhan Yang, Jun Liu, Shijian Lu, Yap-Peng Tan, Kwok-Yan Lam, Alex Kot
AAML · 15 Aug 2024

Towards Adversarial Robustness via Debiased High-Confidence Logit Alignment
Kejia Zhang, Juanjuan Weng, Zhiming Luo, Shaozi Li
AAML · 12 Aug 2024

Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning
Xinwei Liu, Xiaojun Jia, Yuan Xun, Siyuan Liang, Xiaochun Cao
23 Jul 2024

Toward Availability Attacks in 3D Point Clouds
Yifan Zhu, Yibo Miao, Yinpeng Dong, Xiao-Shan Gao
3DPC, AAML · 26 Jun 2024

Semantic Deep Hiding for Robust Unlearnable Examples
Ruohan Meng, Chenyu Yi, Yi Yu, Siyuan Yang, Bingquan Shen, Alex C. Kot
25 Jun 2024

ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification
Xianlong Wang, Shengshan Hu, Yechao Zhang, Ziqi Zhou, Leo Yu Zhang, Peng Xu, Wei Wan, Hai Jin
AAML · 21 Jun 2024

Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI
Robert Hönig, Javier Rando, Nicholas Carlini, Florian Tramèr
WIGM, AAML · 17 Jun 2024

Nonlinear Transformations Against Unlearnable Datasets
T. Hapuarachchi, Jing Lin, Kaiqi Xiong, Mohamed Rahouti, Gitte Ost
05 Jun 2024

Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers
Binxiao Huang, Jason Chun Lok Li, Chang Liu, Ngai Wong
AAML · 09 May 2024

Effective and Robust Adversarial Training against Data and Label Corruptions
Pengfei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai
07 May 2024

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu, Yufei Wang, Song Xia, Wenhan Yang, Shijian Lu, Yap-Peng Tan, A.C. Kot
AAML · 02 May 2024

Ungeneralizable Examples
Jing Ye, Xinchao Wang
22 Apr 2024

Disguised Copyright Infringement of Latent Diffusion Models
Yiwei Lu, Matthew Y.R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu
WIGM · 10 Apr 2024

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam H. Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
SILM, DiffM · 25 Mar 2024

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking
Weixiang Sun, Yixin Liu, Zhiling Yan, Kaidi Xu, Lichao Sun
AAML · 15 Mar 2024

Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal
PILM · 01 Mar 2024

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Yiwei Lu, Matthew Y.R. Yang, Gautam Kamath, Yaoliang Yu
AAML, SILM · 20 Feb 2024

Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in Deep Learning
H. M. Dolatabadi, S. Erfani, Christopher Leckie
AAML · 17 Feb 2024

Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously
Yihan Wang, Yifan Zhu, Xiao-Shan Gao
AAML · 06 Feb 2024

Game-Theoretic Unlearnable Example Generator
Shuang Liu, Yihan Wang, Xiao-Shan Gao
AAML · 31 Jan 2024

Exploring Adversarial Attacks against Latent Diffusion Model from the Perspective of Adversarial Transferability
Junxi Chen, Junhao Dong, Xiaohua Xie
AAML, DiffM · 13 Jan 2024

Data-Dependent Stability Analysis of Adversarial Training
Yihan Wang, Shuang Liu, Xiao-Shan Gao
06 Jan 2024

PosCUDA: Position based Convolution for Unlearnable Audio Datasets
V. Gokul, Shlomo Dubnov
SSL · 04 Jan 2024

Detection and Defense of Unlearnable Examples
Yifan Zhu, Lijia Yu, Xiao-Shan Gao
AAML · 14 Dec 2023

Context Matters: Data-Efficient Augmentation of Large Language Models for Scientific Applications
Xiang Li, Haoran Tang, Siyu Chen, Ziwei Wang, Anurag Maravi, Marcin Abram
12 Dec 2023

Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations
Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang
AAML · 30 Nov 2023

Trainwreck: A damaging adversarial attack on image classifiers
Jan Zahálka
24 Nov 2023

Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective
Yifei Wang, Liangchen Li, Jiansheng Yang, Zhouchen Lin, Yisen Wang
30 Oct 2023

Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers
Wencong You, Zayd Hammoudeh, Daniel Lowd
AAML · 28 Oct 2023

GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation
Yixin Liu, Chenrui Fan, Xun Chen, Pan Zhou, Lichao Sun
11 Oct 2023

Transferable Availability Poisoning Attacks
Yiyong Liu, Michael Backes, Xiao Zhang
AAML · 08 Oct 2023

Test-Time Poisoning Attacks Against Test-Time Adaptation Models
Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang
AAML, TTA · 16 Aug 2023

APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses
Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Chengjie Xu
AAML · 07 Aug 2023

What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Fnu Suya, X. Zhang, Yuan Tian, David E. Evans
OOD, AAML · 03 Jul 2023

On the Exploitability of Instruction Tuning
Manli Shu, Jiong Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein
SILM · 28 Jun 2023

Exploring Model Dynamics for Accumulative Poisoning Discovery
Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han
AAML · 06 Jun 2023