ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Availability Attacks Create Shortcuts
arXiv:2111.00898 · 1 November 2021
Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu · AAML

Papers citing "Availability Attacks Create Shortcuts"

45 / 45 papers shown
MTL-UE: Learning to Learn Nothing for Multi-Task Learning
Yi Yu, Song Xia, Siyuan Yang, Chenqi Kong, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex Chichung Kot · 08 May 2025

BridgePure: Limited Protection Leakage Can Break Black-Box Data Protection
Yihan Wang, Yiwei Lu, Xiao-Shan Gao, Gautam Kamath, Yaoliang Yu · 30 Dec 2024

Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
Yuhao He, Jinyu Tian, Xianwei Zheng, Li Dong, Yuanman Li, L. Zhang · AAML · 06 Nov 2024

Learning from Convolution-based Unlearnable Datasets
Dohyun Kim, Pedro Sandoval-Segura · MU · 04 Nov 2024
Adversarial Training: A Survey
Mengnan Zhao, Lihe Zhang, Jingwen Ye, Huchuan Lu, Baocai Yin, Xinchao Wang · AAML · 19 Oct 2024

UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation
Ye Sun, Hao Zhang, Tiehua Zhang, Xingjun Ma, Yu-Gang Jiang · VLM · 13 Oct 2024

Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
Yixin Liu, Arielle Carr, Lichao Sun · AAML · 01 Oct 2024

Exploiting Supervised Poison Vulnerability to Strengthen Self-Supervised Defense
Jeremy A. Styborski, Mingzhi Lyu, Y. Huang, Adams Kong · 13 Sep 2024
Unlearnable Examples Detection via Iterative Filtering
Yi Yu, Qichen Zheng, Siyuan Yang, Wenhan Yang, Jun Liu, Shijian Lu, Yap-Peng Tan, Kwok-Yan Lam, Alex Kot · AAML · 15 Aug 2024

Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning
Xinwei Liu, Xiaojun Jia, Yuan Xun, Siyuan Liang, Xiaochun Cao · 23 Jul 2024

Toward Availability Attacks in 3D Point Clouds
Yifan Zhu, Yibo Miao, Yinpeng Dong, Xiao-Shan Gao · 3DPC, AAML · 26 Jun 2024

Semantic Deep Hiding for Robust Unlearnable Examples
Ruohan Meng, Chenyu Yi, Yi Yu, Siyuan Yang, Bingquan Shen, Alex C. Kot · 25 Jun 2024
ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification
Xianlong Wang, Shengshan Hu, Yechao Zhang, Ziqi Zhou, Leo Yu Zhang, Peng Xu, Wei Wan, Hai Jin · AAML · 21 Jun 2024

PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Sunay Bhat, Jeffrey Q. Jiang, Omead Brandon Pooladzandi, Alexander Branch, Gregory Pottie · AAML · 28 May 2024

Effective and Robust Adversarial Training against Data and Label Corruptions
Pengfei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai · 07 May 2024

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu, Yufei Wang, Song Xia, Wenhan Yang, Shijian Lu, Yap-Peng Tan, A.C. Kot · AAML · 02 May 2024
Disguised Copyright Infringement of Latent Diffusion Models
Yiwei Lu, Matthew Y.R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu · WIGM · 10 Apr 2024

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking
Weixiang Sun, Yixin Liu, Zhiling Yan, Kaidi Xu, Lichao Sun · AAML · 15 Mar 2024

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Yiwei Lu, Matthew Y.R. Yang, Gautam Kamath, Yaoliang Yu · AAML, SILM · 20 Feb 2024

Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in Deep Learning
H. M. Dolatabadi, S. Erfani, Christopher Leckie · AAML · 17 Feb 2024

Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously
Yihan Wang, Yifan Zhu, Xiao-Shan Gao · AAML · 06 Feb 2024
Data-Dependent Stability Analysis of Adversarial Training
Yihan Wang, Shuang Liu, Xiao-Shan Gao · 06 Jan 2024

Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations
Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang · AAML · 30 Nov 2023

MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning
Yixin Liu, Chenrui Fan, Yutong Dai, Xun Chen, Pan Zhou, Lichao Sun · DiffM · 22 Nov 2023

Protecting Publicly Available Data With Machine Learning Shortcuts
Nicolas M. Muller, Maximilian Burgert, Pascal Debus, Jennifer Williams, Philip Sperl, Konstantin Böttinger · 30 Oct 2023
Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World
Zhiling Zhang, Jie Zhang, Kui Zhang, Wenbo Zhou, Weiming Zhang, Neng H. Yu · 24 Oct 2023

Transferable Availability Poisoning Attacks
Yiyong Liu, Michael Backes, Xiao Zhang · AAML · 08 Oct 2023

A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao · AAML · 01 Oct 2023

APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses
Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Chengjie Xu · AAML · 07 Aug 2023
Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation
Zhengyue Zhao, Jinhao Duan, Xingui Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen · DiffM, WIGM · 02 Jun 2023

What Can We Learn from Unlearnable Datasets?
Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein · 30 May 2023

Sharpness-Aware Data Poisoning Attack
Pengfei He, Han Xu, J. Ren, Yingqian Cui, Hui Liu, Charu C. Aggarwal, Jiliang Tang · AAML · 24 May 2023

Towards Generalizable Data Protection With Transferable Unlearnable Examples
Bin Fang, Bo-wen Li, Shuang Wu, Tianyi Zheng, Shouhong Ding, Ran Yi, Lizhuang Ma · 18 May 2023
Re-thinking Data Availablity Attacks Against Deep Neural Networks
Bin Fang, Bo-wen Li, Shuang Wu, Ran Yi, Shouhong Ding, Lizhuang Ma · AAML · 18 May 2023

Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples
Wanzhu Jiang, Yunfeng Diao, He-Nan Wang, Jianxin Sun, M. Wang, Richang Hong · 16 May 2023

Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks
Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Chengzhong Xu · AAML, MU · 27 Mar 2023

The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models
H. M. Dolatabadi, S. Erfani, C. Leckie · DiffM · 15 Mar 2023
CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi · MU · 07 Mar 2023

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 07 Mar 2023

Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
Zhuoran Liu, Zhengyu Zhao, Martha Larson · 31 Jan 2023

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu · 31 Dec 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 19 Apr 2022
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi · AAML · 22 Feb 2022

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang · MIACV · 13 Jan 2021

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel · MLAU, SILM · 14 Dec 2020