The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization (arXiv:2006.16241)

29 June 2020
Dan Hendrycks
Steven Basart
Norman Mu
Saurav Kadavath
Frank Wang
Evan Dorundo
R. Desai
Tyler Lixuan Zhu
Samyak Parajuli
Mike Guo
D. Song
Jacob Steinhardt
Justin Gilmer
OOD

Papers citing "The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization"

Showing 50 of 1,250 citing papers.
Do Deep Networks Transfer Invariances Across Classes?
Allan Zhou
Fahim Tajwar
Alexander Robey
Tom Knowles
George J. Pappas
Hamed Hassani
Chelsea Finn
OOD
21
18
0
18 Mar 2022
Conditional Prompt Learning for Vision-Language Models
Kaiyang Zhou
Jingkang Yang
Chen Change Loy
Ziwei Liu
VLM
CLIP
VPVLM
25
1,283
0
10 Mar 2022
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
Mitchell Wortsman
Gabriel Ilharco
S. Gadre
Rebecca Roelofs
Raphael Gontijo-Lopes
...
Hongseok Namkoong
Ali Farhadi
Y. Carmon
Simon Kornblith
Ludwig Schmidt
MoMe
42
906
1
10 Mar 2022
Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
William Berrios
Arturo Deza
MedIm
ViT
12
13
0
08 Mar 2022
Geodesic Multi-Modal Mixup for Robust Fine-Tuning
Changdae Oh
Junhyuk So
Hoyoon Byun
Yongtaek Lim
Minchul Shin
Jong-June Jeon
Kyungwoo Song
21
26
0
08 Mar 2022
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches
Maura Pintor
Daniele Angioni
Angelo Sotgiu
Luca Demetrio
Ambra Demontis
Battista Biggio
Fabio Roli
AAML
25
49
0
07 Mar 2022
Concept-based Explanations for Out-Of-Distribution Detectors
Jihye Choi
Jayaram Raghuram
Ryan Feng
Jiefeng Chen
S. Jha
Atul Prakash
OODD
11
12
0
04 Mar 2022
3D Common Corruptions and Data Augmentation
Oğuzhan Fatih Kar
Teresa Yeo
Andrei Atanov
Amir Zamir
3DPC
31
107
0
02 Mar 2022
DeepNet: Scaling Transformers to 1,000 Layers
Hongyu Wang
Shuming Ma
Li Dong
Shaohan Huang
Dongdong Zhang
Furu Wei
MoE
AI4CE
15
155
0
01 Mar 2022
ARIA: Adversarially Robust Image Attribution for Content Provenance
Maksym Andriushchenko
X. Li
Geoffrey Oxholm
Thomas Gittings
Tu Bui
Nicolas Flammarion
John Collomosse
AAML
11
0
0
25 Feb 2022
Improving generalization with synthetic training data for deep learning based quality inspection
Antoine Cordier
Pierre Gutierrez
Victoire Plessis
19
2
0
25 Feb 2022
On Modality Bias Recognition and Reduction
Yangyang Guo
Liqiang Nie
Harry Cheng
Zhiyong Cheng
Mohan S. Kankanhalli
A. Bimbo
25
25
0
25 Feb 2022
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Ananya Kumar
Aditi Raghunathan
Robbie Jones
Tengyu Ma
Percy Liang
OODD
39
640
0
21 Feb 2022
Deconstructing Distributions: A Pointwise Framework of Learning
Gal Kaplun
Nikhil Ghosh
Saurabh Garg
Boaz Barak
Preetum Nakkiran
OOD
25
21
0
20 Feb 2022
Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision
Priya Goyal
Quentin Duval
Isaac Seessel
Mathilde Caron
Ishan Misra
Levent Sagun
Armand Joulin
Piotr Bojanowski
VLM
SSL
23
110
0
16 Feb 2022
Predicting Out-of-Distribution Error with the Projection Norm
Yaodong Yu
Zitong Yang
Alexander Wei
Yi-An Ma
Jacob Steinhardt
OODD
7
43
0
11 Feb 2022
Electricity Consumption Forecasting for Out-of-distribution Time-of-Use Tariffs
Jyoti Narwariya
Chetan Verma
Pankaj Malhotra
L. Vig
E. Subramanian
Sanjay Bhat
AI4TS
17
2
0
11 Feb 2022
The Lifecycle of a Statistical Model: Model Failure Detection, Identification, and Refitting
Alnur Ali
Maxime Cauchois
John C. Duchi
11
2
0
08 Feb 2022
If a Human Can See It, So Should Your System: Reliability Requirements for Machine Vision Components
Boyue Caroline Hu
Lina Marsso
Krzysztof Czarnecki
Rick Salay
Huakun Shen
Marsha Chechik
11
21
0
08 Feb 2022
Benchmarking and Analyzing Point Cloud Classification under Corruptions
Jiawei Ren
Liang Pan
Ziwei Liu
3DPC
8
80
0
07 Feb 2022
Nonparametric Uncertainty Quantification for Single Deterministic Neural Network
Nikita Kotelevskii
A. Artemenkov
Kirill Fedyanin
Fedor Noskov
Alexander Fishkov
Artem Shelmanov
Artem Vazhentsev
Aleksandr Petiushko
Maxim Panov
UQCV
BDL
48
25
0
07 Feb 2022
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
Shiwei Liu
Tianlong Chen
Xiaohan Chen
Li Shen
D. Mocanu
Zhangyang Wang
Mykola Pechenizkiy
11
106
0
05 Feb 2022
NoisyMix: Boosting Model Robustness to Common Corruptions
N. Benjamin Erichson
S. H. Lim
Winnie Xu
Francisco Utrera
Ziang Cao
Michael W. Mahoney
19
17
0
02 Feb 2022
Improving Robustness by Enhancing Weak Subnets
Yong Guo
David Stutz
Bernt Schiele
AAML
14
15
0
30 Jan 2022
Describing Differences between Text Distributions with Natural Language
Ruiqi Zhong
Charles Burton Snell
Dan Klein
Jacob Steinhardt
VLM
122
42
0
28 Jan 2022
A Survey on Visual Transfer Learning using Knowledge Graphs
Sebastian Monka
Lavdim Halilaj
Achim Rettinger
19
23
0
27 Jan 2022
How Robust are Discriminatively Trained Zero-Shot Learning Models?
M. K. Yucel
R. G. Cinbis
Pinar Duygulu
18
14
0
26 Jan 2022
AugLy: Data Augmentations for Robustness
Zoe Papakipos
Joanna Bitton
AAML
29
52
0
17 Jan 2022
Adversarial Machine Learning Threat Analysis and Remediation in Open Radio Access Network (O-RAN)
Edan Habler
Ron Bitton
D. Avraham
D. Mimran
Eitan Klevansky
Oleg Brodt
Heiko Lehmann
Yuval Elovici
A. Shabtai
AAML
31
12
0
16 Jan 2022
Transferability in Deep Learning: A Survey
Junguang Jiang
Yang Shu
Jianmin Wang
Mingsheng Long
OOD
17
100
0
15 Jan 2022
Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
Nenad Tomašev
Ioana Bica
Brian McWilliams
Lars Buesing
Razvan Pascanu
Charles Blundell
Jovana Mitrović
SSL
71
80
0
13 Jan 2022
Leveraging Unlabeled Data to Predict Out-of-Distribution Performance
Saurabh Garg
Sivaraman Balakrishnan
Zachary Chase Lipton
Behnam Neyshabur
Hanie Sedghi
OODD
OOD
32
124
0
11 Jan 2022
A ConvNet for the 2020s
Zhuang Liu
Hanzi Mao
Chaozheng Wu
Christoph Feichtenhofer
Trevor Darrell
Saining Xie
ViT
40
4,945
0
10 Jan 2022
Towards Transferable Unrestricted Adversarial Examples with Minimum Changes
Fangcheng Liu
Chaoning Zhang
Hongyang R. Zhang
AAML
21
18
0
04 Jan 2022
Turath-150K: Image Database of Arab Heritage
Dani Kiyasseh
Rasheed el-Bouri
13
0
0
01 Jan 2022
Optimal Representations for Covariate Shift
Yangjun Ruan
Yann Dubois
Chris J. Maddison
OOD
18
68
0
31 Dec 2021
PRIME: A few primitives can boost robustness to common corruptions
Apostolos Modas
Rahul Rade
Guillermo Ortiz-Jiménez
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
AAML
16
41
0
27 Dec 2021
Pre-Training Transformers for Domain Adaptation
Burhan Ul Tayyab
Nicholas Chua
ViT
13
2
0
18 Dec 2021
Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis
Carey Shenkman
Dhanaraj Thakur
Emma Llansó
14
8
0
15 Dec 2021
PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures
Dan Hendrycks
Andy Zou
Mantas Mazeika
Leonard Tang
Bo-wen Li
D. Song
Jacob Steinhardt
UQCV
21
136
0
09 Dec 2021
3D-VField: Adversarial Augmentation of Point Clouds for Domain Generalization in 3D Object Detection
Alexander Lehner
Stefano Gasperini
Alvaro Marcos-Ramiro
Michael Schmidt
M. N. Mahani
Nassir Navab
Benjamin Busam
F. Tombari
3DPC
21
51
0
09 Dec 2021
Dilated convolution with learnable spacings
Ismail Khalfaoui-Hassani
Thomas Pellegrini
T. Masquelier
8
31
0
07 Dec 2021
Benchmark for Out-of-Distribution Detection in Deep Reinforcement Learning
Aaqib Parvez Mohammed
Matias Valdenegro-Toro
OOD
OffRL
16
10
0
05 Dec 2021
Dynamic Token Normalization Improves Vision Transformers
Wenqi Shao
Yixiao Ge
Zhaoyang Zhang
Xuyuan Xu
Xiaogang Wang
Ying Shan
Ping Luo
ViT
121
11
0
05 Dec 2021
SITA: Single Image Test-time Adaptation
Ansh Khurana
S. Paul
Piyush Rai
Soma Biswas
Gaurav Aggarwal
69
52
0
04 Dec 2021
Reward-Free Attacks in Multi-Agent Reinforcement Learning
Ted Fujimoto
T. Doster
A. Attarian
Jill M. Brandenberger
Nathan Oken Hodas
AAML
19
4
0
02 Dec 2021
Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines
Jiachen Sun
Akshay Mehra
B. Kailkhura
Pin-Yu Chen
Dan Hendrycks
Jihun Hamm
Z. Morley Mao
AAML
17
21
0
01 Dec 2021
A Systematic Review of Robustness in Deep Learning for Computer Vision: Mind the gap?
Nathan G. Drenkow
Numair Sani
I. Shpitser
Mathias Unberath
16
73
0
01 Dec 2021
Pyramid Adversarial Training Improves ViT Performance
Charles Herrmann
Kyle Sargent
Lu Jiang
Ramin Zabih
Huiwen Chang
Ce Liu
Dilip Krishnan
Deqing Sun
ViT
18
56
0
30 Nov 2021
DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation
Lukas Hoyer
Dengxin Dai
Luc Van Gool
AI4CE
14
450
0
29 Nov 2021