Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers

International Conference on Machine Learning (ICML), 2021
1 March 2021
Francesco Croce
Matthias Hein
AAML
ArXiv (abs) · PDF · HTML · GitHub (25★)

Papers citing "Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers"

37 papers shown
Angular Gradient Sign Method: Uncovering Vulnerabilities in Hyperbolic Networks
Minsoo Jo
Dongyoon Yang
Taesup Kim
AAML
17 Nov 2025
When Flatness Does (Not) Guarantee Adversarial Robustness
Nils Philipp Walter
Linara Adilova
Jilles Vreeken
Michael Kamp
16 Oct 2025
Structured Universal Adversarial Attacks on Object Detection for Video Sequences
Sven Jacob
Weijia Shao
Gjergji Kasneci
AAML
16 Oct 2025
MAIA: An Inpainting-Based Approach for Music Adversarial Attacks
Yuxuan Liu
Peihong Zhang
Rui Sang
Zhixin Li
Shengchen Li
AAML
05 Sep 2025
Fast Adversarial Training against Sparse Attacks Requires Loss Smoothing
Xuyang Zhong
Yixiao Huang
Chen Liu
AAML
28 Feb 2025
FAIR-TAT: Improving Model Fairness Using Targeted Adversarial Training
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2024
Tejaswini Medi
Steffen Jung
Margret Keuper
AAML
30 Oct 2024
Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models
Siyu Zhai
Zhibo He
Xiaofeng Cong
Junming Hou
Jie Gui
Jian Wei You
Xin Gong
James Tin-Yau Kwok
Yuan Yan Tang
AAML
10 Sep 2024
The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective
Nils Philipp Walter
Linara Adilova
Jilles Vreeken
Michael Kamp
AAML
27 May 2024
Boosting Few-Pixel Robustness Verification via Covering Verification Designs
International Conference on Computer Aided Verification (CAV), 2024
Yuval Shapira
Naor Wiesel
Shahar Shabelman
Dana Drachsler-Cohen
AAML
17 May 2024
Sparse-PGD: A Unified Framework for Sparse Adversarial Perturbations Generation
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024
Xuyang Zhong
Yixiao Huang
AAML
08 May 2024
AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples
Antonio Emanuele Cinà
Jérôme Rony
Maura Pintor
Christian Scano
Ambra Demontis
Battista Biggio
Ismail Ben Ayed
Fabio Roli
ELM, AAML, SILM
30 Apr 2024
BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack
Viet Vo
Ehsan Abbasnejad
Damith C. Ranasinghe
AAML
08 Apr 2024
On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations
Chester Holtz
Yucheng Wang
Chung-Kuan Cheng
Bill Lin
AAML, OOD
29 Feb 2024
On the Duality Between Sharpness-Aware Minimization and Adversarial Training
Yihao Zhang
Hangzhou He
Jingyu Zhu
Huanran Chen
Yifei Wang
Zeming Wei
AAML
23 Feb 2024
Where and How to Attack? A Causality-Inspired Recipe for Generating Counterfactual Adversarial Examples
Ruichu Cai
Yuxuan Zhu
Jie Qiao
Zefeng Liang
Furui Liu
Zhifeng Hao
CML
21 Dec 2023
LipSim: A Provably Robust Perceptual Similarity Metric
International Conference on Learning Representations (ICLR), 2023
Sara Ghazanfari
Alexandre Araujo
Prashanth Krishnamurthy
Farshad Khorrami
Siddharth Garg
27 Oct 2023
On Continuity of Robust and Accurate Classifiers
Ramin Barati
Reza Safabakhsh
Mohammad Rahmati
AAML
29 Sep 2023
R-LPIPS: An Adversarially Robust Perceptual Similarity Metric
Sara Ghazanfari
S. Garg
Prashanth Krishnamurthy
Farshad Khorrami
Alexandre Araujo
27 Jul 2023
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
Julia Grabinski
Steffen Jung
J. Keuper
Margret Keuper
AAML
19 Jul 2023
Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models
European Conference on Computer Vision (ECCV), 2023
Francesco Croce
Naman D. Singh
Matthias Hein
VLM
22 Jun 2023
Towards Better Certified Segmentation via Diffusion Models
Conference on Uncertainty in Artificial Intelligence (UAI), 2023
Othmane Laousy
Alexandre Araujo
G. Chassagnon
M. Revel
S. Garg
Farshad Khorrami
Maria Vakalopoulou
DiffM
16 Jun 2023
The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks
Computer Vision and Pattern Recognition (CVPR), 2023
I. Frosio
Jan Kautz
AAML
23 May 2023
Optimization and Optimizers for Adversarial Robustness
Hengyue Liang
Buyun Liang
Le Peng
Ying Cui
Tim Mitchell
Ju Sun
AAML
23 Mar 2023
Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models
Neural Information Processing Systems (NeurIPS), 2023
Naman D. Singh
Francesco Croce
Matthias Hein
OOD
03 Mar 2023
MultiRobustBench: Benchmarking Robustness Against Multiple Attacks
International Conference on Machine Learning (ICML), 2023
Sihui Dai
Saeed Mahloujifar
Chong Xiang
Vikash Sehwag
Pin-Yu Chen
Prateek Mittal
AAML, OOD
21 Feb 2023
Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts
Computer Vision and Pattern Recognition (CVPR), 2023
Francesco Croce
Sylvestre-Alvise Rebuffi
Evan Shelhamer
Sven Gowal
AAML
20 Feb 2023
Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance
Ngoc N. Tran
Anh Tuan Bui
Dinh Q. Phung
Trung Le
AAML
05 Dec 2022
Can we achieve robustness from data alone?
Nikolaos Tsilivis
Jingtong Su
Julia Kempe
OOD, DD
24 Jul 2022
Sparse Visual Counterfactual Explanations in Image Space
German Conference on Pattern Recognition (GCPR), 2022
Valentyn Boreiko
Maximilian Augustin
Francesco Croce
Philipp Berens
Matthias Hein
BDL, CML
16 May 2022
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training
Chen Liu
Zhichao Huang
Mathieu Salzmann
Tong Zhang
Sabine Süsstrunk
AAML
14 Dec 2021
Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks
Neural Information Processing Systems (NeurIPS), 2021
Maksym Yatsura
J. H. Metzen
Matthias Hein
OOD
02 Nov 2021
Protein Folding Neural Networks Are Not Robust
Sumit Kumar Jha
Arvind Ramanathan
Rickard Ewetz
Alvaro Velasquez
Susmit Jha
AAML
09 Sep 2021
Adversarial Robustness against Multiple and Single $l_p$-Threat Models via Quick Fine-Tuning of Robust Classifiers
International Conference on Machine Learning (ICML), 2021
Francesco Croce
Matthias Hein
OOD, AAML
26 May 2021
Internal Wasserstein Distance for Adversarial Attack and Defense
Jincheng Li
Shuhai Zhang
Jingyun Liang
Jian Chen
Zhuliang Yu
Yang Xiang
AAML
13 Mar 2021
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
AAAI Conference on Artificial Intelligence (AAAI), 2020
Francesco Croce
Maksym Andriushchenko
Naman D. Singh
Nicolas Flammarion
Matthias Hein
23 Jun 2020
Learning to Generate Noise for Multi-Attack Robustness
Divyam Madaan
Jinwoo Shin
Sung Ju Hwang
NoLa, AAML
22 Jun 2020
Towards Backdoor Attacks and Defense in Robust Machine Learning Models
Computers & Security (CS), 2020
E. Soremekun
Sakshi Udeshi
Sudipta Chattopadhyay
AAML
25 Feb 2020