Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
arXiv: 2102.04716 · 9 February 2021 · AAML
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen

Papers citing "Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training"

13 / 13 papers shown
Nonlinear Transformations Against Unlearnable Datasets
T. Hapuarachchi, Jing Lin, Kaiqi Xiong, Mohamed Rahouti, Gitte Ost
05 Jun 2024

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
28 May 2024 · AAML

Effective and Robust Adversarial Training against Data and Label Corruptions
Pengfei Zhang, Zi Huang, Xin-Shun Xu, Guangdong Bai
07 May 2024

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu, Yufei Wang, Song Xia, Wenhan Yang, Shijian Lu, Yap-Peng Tan, A.C. Kot
02 May 2024 · AAML

CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi
07 Mar 2023 · MU

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He, Kaiwen Zha, Dina Katabi
22 Feb 2022 · AAML

On the Convergence and Robustness of Adversarial Training
Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu
15 Dec 2021 · AAML

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma
07 Oct 2021 · AAML, TPM

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
04 May 2021 · AAML

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
13 Jan 2021 · MIACV

Unadversarial Examples: Designing Objects for Robust Vision
Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai H. Vemprala, A. Madry, Ashish Kapoor
22 Dec 2020 · WIGM

On Evaluating Neural Network Backdoor Defenses
A. Veldanda, S. Garg
23 Oct 2020 · AAML

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
19 Oct 2020 · VLM