Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power

27 May 2022 · arXiv:2205.13863
Binghui Li, Jikai Jin, Han Zhong, J. Hopcroft, Liwei Wang
OOD
ArXiv · PDF · HTML

Papers citing "Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power"

19 papers shown
Generalizability of Neural Networks Minimizing Empirical Risk Based on Expressive Ability
Lijia Yu, Yibo Miao, Yifan Zhu, Xiao-Shan Gao, Lijun Zhang (06 Mar 2025)

Curse of Dimensionality in Neural Network Optimization
Sanghoon Na, Haizhao Yang (07 Feb 2025)

To Measure or Not: A Cost-Sensitive, Selective Measuring Environment for Agricultural Management Decisions with Reinforcement Learning
Hilmy Baja, Michiel Kallenberg, Ioannis Athanasiadis (22 Jan 2025) [OffRL]

Generalizability of Memorization Neural Networks
Lijia Yu, Xiao-Shan Gao, Lijun Zhang, Yibo Miao (01 Nov 2024)

Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data
Binghui Li, Yuanzhi Li (11 Oct 2024) [OOD]

Life, uh, Finds a Way: Systematic Neural Search
Alex Baranski, Jun Tani (02 Oct 2024)

Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis
Zhang Chen, Luca Demetrio, Srishti Gupta, Xiaoyi Feng, Zhaoqiang Xia, ..., Maura Pintor, Luca Oneto, Ambra Demontis, Battista Biggio, Fabio Roli (14 Jun 2024) [AAML]

From Robustness to Improved Generalization and Calibration in Pre-trained Language Models
Josip Jukić, Jan Snajder (31 Mar 2024)

Towards White Box Deep Learning
Maciej Satkiewicz (14 Mar 2024) [AAML]

Deep Networks Always Grok and Here is Why
Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk (23 Feb 2024) [AAML, OOD, AI4CE]

Is Adversarial Training with Compressed Datasets Effective?
Tong Chen, Raghavendra Selvan (08 Feb 2024) [AAML]

Can overfitted deep neural networks in adversarial training generalize? -- An approximation viewpoint
Zhongjie Shi, Fanghui Liu, Yuan Cao, Johan A. K. Suykens (24 Jan 2024)

Data-Dependent Stability Analysis of Adversarial Training
Yihan Wang, Shuang Liu, Xiao-Shan Gao (06 Jan 2024)

Towards Understanding Clean Generalization and Robust Overfitting in Adversarial Training
Binghui Li, Yuanzhi Li (02 Jun 2023) [AAML]

On the Importance of Backbone to the Adversarial Robustness of Object Detectors
Xiao-Li Li, Hang Chen, Xiaolin Hu (27 May 2023) [AAML]

It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness
Peiyu Xiong, Michael W. Tegegn, Jaskeerat Singh Sarin, Shubhraneel Pal, Julia Rubin (17 Mar 2023) [SILM, AAML]

Understanding CNN Fragility When Learning With Imbalanced Data
Damien Dablain, Kristen N. Jacobson, C. Bellinger, Mark Roberts, Nitesh V. Chawla (17 Oct 2022)

Optimal Approximation Rate of ReLU Networks in terms of Width and Depth
Zuowei Shen, Haizhao Yang, Shijun Zhang (28 Feb 2021)

Benefits of depth in neural networks
Matus Telgarsky (14 Feb 2016)