Why adversarial training can hurt robust accuracy
International Conference on Learning Representations (ICLR), 2022
3 March 2022
Jacob Clarysse, Julia Hörrmann, Fanny Yang
Tags: AAML

Papers citing "Why adversarial training can hurt robust accuracy" (11 papers)
Top-GAP: Integrating Size Priors in CNNs for more Interpretability, Robustness, and Bias Mitigation
Lars Nieradzik, Henrike Stephani, Janis Keuper
Tags: FAtt, AAML
07 Sep 2024
Approximate Nullspace Augmented Finetuning for Robust Vision Transformers
Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
Tags: AAML, ViT
15 Mar 2024
Specification Overfitting in Artificial Intelligence
Artificial Intelligence Review (Artif Intell Rev), 2024
Benjamin Roth, Pedro Henrique Luz de Araujo, Yuxi Xia, Saskia Kaltenbrunner, Christoph Korab
13 Mar 2024
Adversarially Robust Feature Learning for Breast Cancer Diagnosis
Degan Hao, Dooman Arefan, M. Zuley, Wendie Berg, Shandong Wu
Tags: OOD, MedIm
13 Feb 2024
Accuracy of TextFooler black box adversarial attacks on 01 loss sign activation neural network ensemble
Yunzhe Xue, Usman Roshan
Tags: AAML
12 Feb 2024
Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention
Network and Distributed System Security Symposium (NDSS), 2023
Lujia Shen, Yuwen Pu, R. Beyah, Changjiang Li, Xuhong Zhang, Chunpeng Ge, Ting Wang
Tags: AAML
29 Nov 2023
Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2023
Shaopeng Fu, Haiyan Zhao
Tags: AAML
09 Oct 2023
Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments
IEEE International Conference on Distributed Computing Systems (ICDCS), 2023
Simon Queyrut, V. Schiavoni, Pascal Felber
Tags: AAML, FedML
13 Sep 2023
Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning
Simon Queyrut, Yérom-David Bromberg, V. Schiavoni
Tags: FedML, AAML
08 Aug 2023
How robust accuracy suffers from certified training with convex relaxations
Piersilvio De Bartolomeis, Jacob Clarysse, Amartya Sanyal, Fanny Yang
Tags: AAML
12 Jun 2023
Devil is in Channels: Contrastive Single Domain Generalization for Medical Image Segmentation
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2023
Shishuai Hu, Zehui Liao, Yong-quan Xia
Tags: MedIm
08 Jun 2023