The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning
Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, Rebecca Roelofs
30 June 2021 · arXiv:2106.15831
Tags: OOD, OODD

Papers citing "The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning"

50 / 61 papers shown
Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models
Kaican Li, Weiyan Xie, Yongxiang Huang, Didan Deng, Lanqing Hong, Z. Li, Ricardo Silva, N. Zhang
29 Nov 2024

LAGUNA: LAnguage Guided UNsupervised Adaptation with structured spaces
Anxhelo Diko, Antonino Furnari, Luigi Cinque, G. Farinella
23 Nov 2024

They're All Doctors: Synthesizing Diverse Counterfactuals to Mitigate Associative Bias
Salma Abdel Magid, Jui-Hsien Wang, Kushal Kafle, Hanspeter Pfister
17 Jun 2024

On the Use of Anchoring for Training Vision Models
V. Narayanaswamy, Kowshik Thopalli, Rushil Anirudh, Yamen Mubarka, W. Sakla, Jayaraman J. Thiagarajan
01 Jun 2024

CRoFT: Robust Fine-Tuning with Concurrent Optimization for OOD Generalization and Open-Set OOD Detection
Lin Zhu, Yifeng Yang, Qinying Gu, Xinbing Wang, Cheng Zhou, Nanyang Ye
Tags: VLM
26 May 2024

Feature Protection For Out-of-distribution Generalization
Lu Tan, Huei Zhou, Yinxiang Huang, Zeming Zheng, Yujiu Yang
Tags: OODD
25 May 2024

Aggregate Representation Measure for Predictive Model Reusability
Vishwesh Sangarya, Richard M. Bradford, Jung-Eun Kim
15 May 2024

Robust Fine-tuning for Pre-trained 3D Point Cloud Models
Zhibo Zhang, Ximing Yang, Weizhong Zhang, Cheng Jin
Tags: 3DPC
25 Apr 2024

A noisy elephant in the room: Is your out-of-distribution detector robust to label noise?
Galadrielle Humblot-Renaux, Sergio Escalera, T. Moeslund
Tags: OODD, UQCV, NoLa
02 Apr 2024

Towards Low-Energy Adaptive Personalization for Resource-Constrained Devices
Yushan Huang, Josh Millar, Yuxuan Long, Yuchen Zhao, Hamed Haddadi
23 Mar 2024

Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders
Andrew Geng, Pin-Yu Chen
Tags: OODD
16 Mar 2024

Tell, Don't Show!: Language Guidance Eases Transfer Across Domains in Images and Videos
Tarun Kalluri, Bodhisattwa Prasad Majumder, Manmohan Chandraker
Tags: VLM
08 Mar 2024

A Survey on Evaluation of Out-of-Distribution Generalization
Han Yu, Jiashuo Liu, Xingxuan Zhang, Jiayun Wu, Peng Cui
Tags: OOD
04 Mar 2024

Ask Your Distribution Shift if Pre-Training is Right for You
Benjamin Cohen-Wang, Joshua Vendrow, Aleksander Madry
Tags: OOD
29 Feb 2024

AutoFT: Learning an Objective for Robust Fine-Tuning
Caroline Choi, Yoonho Lee, Annie S. Chen, Allan Zhou, Aditi Raghunathan, Chelsea Finn
Tags: OOD
18 Jan 2024

Efficient Stitchable Task Adaptation
Haoyu He, Zizheng Pan, Jing Liu, Jianfei Cai, Bohan Zhuang
29 Nov 2023

Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompt
Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Junqing Zhou, Defu Lian, Ying Wei
Tags: VLM
19 Oct 2023

Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
Liam Collins, Shanshan Wu, Sewoong Oh, K. Sim
Tags: FedML
06 Oct 2023

Mitigating the Alignment Tax of RLHF
Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Zeming Zheng, ..., Han Zhao, Nan Jiang, Heng Ji, Yuan Yao, Tong Zhang
Tags: MoMe, CLL
12 Sep 2023

VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception
Jiyoung Lee, Seung Wook Kim, Seunghyun Won, Joonseok Lee, Marzyeh Ghassemi, James Thorne, Jaeseok Choi, O.-Kil Kwon, E. Choi
03 Aug 2023

Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning
Kaijie Zhu, Jindong Wang, Xixu Hu, Xingxu Xie, G. Yang
Tags: AAML
01 Aug 2023

COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts
Xiaofeng Mao, YueFeng Chen, Yao Zhu, Da Chen, Hang Su, Rong Zhang, H. Xue
Tags: ObjD, OOD
24 Jul 2023

On the Connection between Pre-training Data Diversity and Fine-tuning Robustness
Vivek Ramanujan, Thao Nguyen, Sewoong Oh, Ludwig Schmidt, Ali Farhadi
Tags: OOD
24 Jul 2023

A Holistic Assessment of the Reliability of Machine Learning Systems
Anthony Corso, David Karamadian, Romeo Valentin, Mary Cooper, Mykel J. Kochenderfer
20 Jul 2023

An Empirical Study of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration
Hiroki Naganuma, Ryuichiro Hataya, Kotaro Yoshida, Ioannis Mitliagkas
Tags: OODD
17 Jul 2023

Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection
Rheeya Uppaal, Junjie Hu, Yixuan Li
Tags: OODD
22 May 2023

Accuracy on the Curve: On the Nonlinear Correlation of ML Performance Between Data Subpopulations
Weixin Liang, Yining Mao, Yongchan Kwon, Xinyu Yang, James Y. Zou
Tags: OODD
04 May 2023

A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias
Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan
Tags: AAML
23 Mar 2023

Edit at your own risk: evaluating the robustness of edited models to distribution shifts
Davis Brown, Charles Godfrey, Cody Nizinski, Jonathan Tu, Henry Kvinge
Tags: KELM
28 Feb 2023

Scaling Vision Transformers to 22 Billion Parameters
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, ..., Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, N. Houlsby
Tags: MLLM
10 Feb 2023

Effective Robustness against Natural Distribution Shifts for Models with Different Training Data
Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin
Tags: OOD
02 Feb 2023

Leveraging Unlabeled Data to Track Memorization
Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran
Tags: NoLa, TDI
08 Dec 2022

Finetune like you pretrain: Improved finetuning of zero-shot vision models
Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, Aditi Raghunathan
Tags: CLIP, VLM
01 Dec 2022

Context-Aware Robust Fine-Tuning
Xiaofeng Mao, YueFeng Chen, Xiaojun Jia, Rong Zhang, Hui Xue, Zhao Li
Tags: VLM, CLIP
29 Nov 2022

Okapi: Generalising Better by Making Statistical Matches Match
Myles Bartlett, Sara Romiti, V. Sharmanska, Novi Quadrianto
07 Nov 2022

Exploring The Landscape of Distributional Robustness for Question Answering Models
Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Sewon Min, Ian H. Magnusson, Hannaneh Hajishirzi, Ludwig Schmidt
Tags: ELM, OOD, KELM
22 Oct 2022

Surgical Fine-Tuning Improves Adaptation to Distribution Shifts
Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn
Tags: OOD
20 Oct 2022

Transfer Learning with Pretrained Remote Sensing Transformers
A. Fuller, K. Millard, J.R. Green
28 Sep 2022

ID and OOD Performance Are Sometimes Inversely Correlated on Real-world Datasets
Damien Teney, Yong Lin, Seong Joon Oh, Ehsan Abbasnejad
Tags: OOD
01 Sep 2022

Patching open-vocabulary models by interpolating weights
Gabriel Ilharco, Mitchell Wortsman, S. Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, Ludwig Schmidt
Tags: VLM, KELM
10 Aug 2022

Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP
Thao Nguyen, Gabriel Ilharco, Mitchell Wortsman, Sewoong Oh, Ludwig Schmidt
Tags: CLIP, VLM
10 Aug 2022

On Transfer of Adversarial Robustness from Pretraining to Downstream Tasks
Laura Fee Nern, Harsh Raj, Maurice Georgi, Yash Sharma
Tags: AAML
07 Aug 2022

Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models
Huy Ha, Shuran Song
Tags: LM&Ro, VLM
23 Jul 2022

Assaying Out-Of-Distribution Generalization in Transfer Learning
F. Wenzel, Andrea Dittadi, Peter V. Gehler, Carl-Johann Simon-Gabriel, Max Horn, ..., Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
Tags: OOD, OODD, AAML
19 Jul 2022

Models Out of Line: A Fourier Lens on Distribution Shift Robustness
Sara Fridovich-Keil, Brian Bartoldson, James Diffenderfer, B. Kailkhura, P. Bremer
Tags: OOD
08 Jul 2022

Motley: Benchmarking Heterogeneity and Personalization in Federated Learning
Shan-shan Wu, Tian Li, Zachary B. Charles, Yu Xiao, Ziyu Liu, Zheng Xu, Virginia Smith
Tags: FedML
18 Jun 2022

Robust and Efficient Medical Imaging with Self-Supervision
Shekoofeh Azizi, Laura J. Culp, Jan Freyberg, Basil Mustafa, Sebastien Baur, ..., Geoffrey E. Hinton, N. Houlsby, Alan Karthikesalingam, Mohammad Norouzi, Vivek Natarajan
Tags: OOD
19 May 2022

When does dough become a bagel? Analyzing the remaining mistakes on ImageNet
Vijay Vasudevan, Benjamin Caine, Raphael Gontijo-Lopes, Sara Fridovich-Keil, Rebecca Roelofs
Tags: VLM, UQCV
09 May 2022

CoCa: Contrastive Captioners are Image-Text Foundation Models
Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, Yonghui Wu
Tags: VLM, CLIP, OffRL
04 May 2022

Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)
Alex Fang, Gabriel Ilharco, Mitchell Wortsman, Yu Wan, Vaishaal Shankar, Achal Dave, Ludwig Schmidt
Tags: VLM, OOD
03 May 2022