Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
Colin Wei, Tengyu Ma
arXiv:1910.04284, 9 October 2019
Topics: AAML, OOD
Papers citing "Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin" (17 papers shown)
 1. Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment. Naoya Hasegawa, Issei Sato. 26 Sep 2024.
 2. Theoretical Analysis of Weak-to-Strong Generalization. Hunter Lang, David Sontag, Aravindan Vijayaraghavan. 25 May 2024.
 3. Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization. Kaiyue Wen, Zhiyuan Li, Tengyu Ma. [FAtt] 20 Jul 2023.
 4. How Does Sharpness-Aware Minimization Minimize Sharpness? Kaiyue Wen, Tengyu Ma, Zhiyuan Li. [AAML] 10 Nov 2022.
 5. A PAC-Bayesian Generalization Bound for Equivariant Networks. Arash Behboodi, Gabriele Cesa, Taco S. Cohen. 24 Oct 2022.
 6. Information FOMO: The Unhealthy Fear of Missing Out on Information. A Method for Removing Misleading Data for Healthier Models. Ethan Pickering, T. Sapsis. 27 Aug 2022.
 7. Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations. Tianlong Chen, Peihao Wang, Zhiwen Fan, Zhangyang Wang. 04 Jul 2022.
 8. On the Role of Generalization in Transferability of Adversarial Examples. Yilin Wang, Farzan Farnia. [AAML] 18 Jun 2022.
 9. Adversarial Robustness of Sparse Local Lipschitz Predictors. Ramchandran Muthukumar, Jeremias Sulam. [AAML] 26 Feb 2022.
10. Improved Regularization and Robustness for Fine-tuning in Neural Networks. Dongyue Li, Hongyang R. Zhang. [NoLa] 08 Nov 2021.
11. A Scaling Law for Synthetic-to-Real Transfer: How Much Is Your Pre-training Effective? Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, S. Maeda, Kohei Hayashi. 25 Aug 2021.
12. Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks. Pranjal Awasthi, Natalie Frank, M. Mohri. [AAML] 28 Apr 2020.
13. The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization. Yifei Min, Lin Chen, Amin Karbasi. [AAML] 25 Feb 2020.
14. CAT: Customized Adversarial Training for Improved Robustness. Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh. [OOD, AAML] 17 Feb 2020.
15. Angular Visual Hardness. Beidi Chen, Weiyang Liu, Zhiding Yu, Jan Kautz, Anshumali Shrivastava, Animesh Garg, Anima Anandkumar. [AAML] 04 Dec 2019.
16. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. [ODL] 15 Sep 2016.
17. Norm-Based Capacity Control in Neural Networks. Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. 27 Feb 2015.