The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning

arXiv:2106.15831 · 30 June 2021
Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, Rebecca Roelofs
Tags: OOD, OODD

Papers citing "The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning"

11 of 61 papers shown.

Understanding out-of-distribution accuracies through quantifying difficulty of test samples
Berfin Simsek, Melissa Hall, Levent Sagun
15 · 5 · 0 · 28 Mar 2022

Are Vision Transformers Robust to Spurious Correlations?
Soumya Suvra Ghosal, Yifei Ming, Yixuan Li
Tags: ViT
23 · 28 · 0 · 17 Mar 2022

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt
Tags: MoMe
36 · 906 · 1 · 10 Mar 2022

Do better ImageNet classifiers assess perceptual similarity better?
Manoj Kumar, N. Houlsby, Nal Kalchbrenner, E. D. Cubuk
14 · 31 · 0 · 09 Mar 2022

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang
Tags: OODD
25 · 638 · 0 · 21 Feb 2022

Deep Ensembles Work, But Are They Necessary?
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, R. Zemel, John P. Cunningham
Tags: OOD, UQCV
16 · 58 · 0 · 14 Feb 2022

A benchmark with decomposed distribution shifts for 360 monocular depth estimation
G. Albanis, N. Zioulis, Petros Drakoulis, Federico Álvarez, D. Zarpalas, P. Daras
Tags: MDE
20 · 0 · 0 · 01 Dec 2021

No One Representation to Rule Them All: Overlapping Features of Training Methods
Raphael Gontijo-Lopes, Yann N. Dauphin, E. D. Cubuk
18 · 58 · 0 · 20 Oct 2021

Robust fine-tuning of zero-shot models
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, ..., Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt
Tags: VLM
21 · 687 · 0 · 04 Sep 2021

Improving Self-supervised Learning with Hardness-aware Dynamic Curriculum Learning: An Application to Digital Pathology
C. Srinidhi, Anne L. Martel
23 · 22 · 0 · 16 Aug 2021

Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Y. Carmon, Ludwig Schmidt
Tags: OODD, OOD
14 · 265 · 0 · 09 Jul 2021