
Energy-based Preference Optimization for Test-time Adaptation
Yewon Han, Seoyun Yang, Taesup Kim
arXiv 2505.19607 · 26 May 2025 · TTA

Papers citing "Energy-based Preference Optimization for Test-time Adaptation" (34 papers)

  1. Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining
     Daouda Sow, Herbert Woisetschläger, Saikiran Bulusu, Shiqiang Wang, Hans-Arno Jacobsen, Yingbin Liang
     10 Feb 2025 · 6 citations

  2. STAMP: Outlier-Aware Test-Time Adaptation with Stable Memory Replay
     Yongcan Yu, Lijun Sheng, Ran He, Jian Liang
     22 Jul 2024 · TTA · 6 citations

  3. Gradient Reweighting: Towards Imbalanced Class-Incremental Learning
     Jiangpeng He, Fengqing Zhu
     28 Feb 2024 · CLL · 22 citations

  4. Resilient Practical Test-Time Adaptation: Soft Batch Normalization Alignment and Entropy-driven Memory Bank
     Xingzhi Zhou, Zhiliang Tian, Ka Chun Cheung, Simon See, Nevin L. Zhang
     26 Jan 2024 · 3 citations

  5. TEA: Test-time Energy Adaptation
     Yige Yuan, Bingbing Xu, Liang Hou, Fei Sun, Huawei Shen, Xueqi Cheng
     24 Nov 2023 · TTA, VLM · 11 citations

  6. AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation
     Damian Sójka, Sebastian Cygert, Bartlomiej Twardowski, Tomasz Trzciński
     18 Sep 2023 · TTA · 8 citations

  7. Energy Discrepancies: A Score-Independent Loss for Energy-Based Models
     Tobias Schröder, Zijing Ou, Jen Ning Lim, Yingzhen Li, Sebastian J. Vollmer, Andrew B. Duncan
     12 Jul 2023 · 7 citations

  8. Direct Preference Optimization: Your Language Model is Secretly a Reward Model
     Rafael Rafailov, Archit Sharma, E. Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn
     29 May 2023 · ALM · 4,184 citations

  9. EGC: Image Generation and Classification via a Diffusion Energy-Based Model
     Qiushan Guo, Chuofan Ma, Yi Jiang, Zehuan Yuan, Yizhou Yu, Ping Luo
     04 Apr 2023 · DiffM · 8 citations

 10. Robust Test-Time Adaptation in Dynamic Scenarios
     Longhui Yuan, Binhui Xie, Shuangliang Li
     24 Mar 2023 · TTA · 126 citations

 11. Towards Stable Test-Time Adaptation in Dynamic Wild World
     Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Z. Wen, Yaofo Chen, P. Zhao, Mingkui Tan
     24 Feb 2023 · TTA · 281 citations

 12. Robust Mean Teacher for Continual and Gradual Test-Time Adaptation
     Mario Döbler, Robert A. Marsden, Bin Yang
     23 Nov 2022 · OOD, TTA · 90 citations

 13. NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation
     Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, Sung-Ju Lee
     10 Aug 2022 · OOD, TTA · 132 citations

 14. Efficient Test-Time Model Adaptation without Forgetting
     Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, S. Zheng, P. Zhao, Mingkui Tan
     06 Apr 2022 · OOD, VLM, TTA · 353 citations

 15. Continual Test-Time Domain Adaptation
     Qin Wang, Olga Fink, Luc Van Gool, Dengxin Dai
     25 Mar 2022 · OOD, TTA · 432 citations

 16. Training language models to follow instructions with human feedback
     Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
     04 Mar 2022 · OSLM, ALM · 13,272 citations

 17. JEM++: Improved Techniques for Training JEM
     Xiulong Yang, Shihao Ji
     19 Sep 2021 · AAML, VLM · 31 citations

 18. Energy-Based Open-World Uncertainty Modeling for Confidence Calibration
     Yezhen Wang, Yue Liu, Tong Che, Kaiyang Zhou, Ziwei Liu, Dongsheng Li
     27 Jul 2021 · UQCV · 51 citations

 19. Exponentiated Gradient Reweighting for Robust Training Under Label Noise and Beyond
     Negin Majidi, Ehsan Amid, Hossein Talebi, Manfred K. Warmuth
     03 Apr 2021 · NoLa · 19 citations

 20. Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models
     Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl
     18 Jan 2021 · 22 citations

 21. How to Train Your Energy-Based Models
     Yang Song, Diederik P. Kingma
     09 Jan 2021 · DiffM · 265 citations

 22. RobustBench: a standardized adversarial robustness benchmark
     Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
     19 Oct 2020 · VLM · 707 citations

 23. Energy-based Out-of-distribution Detection
     Weitang Liu, Xiaoyun Wang, John Douglas Owens, Yixuan Li
     08 Oct 2020 · OODD · 1,382 citations

 24. Improving robustness against common corruptions by covariate shift adaptation
     Steffen Schneider, E. Rusak, L. Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge
     30 Jun 2020 · VLM · 485 citations

 25. Residual Energy-Based Models for Text Generation
     Yuntian Deng, A. Bakhtin, Myle Ott, Arthur Szlam, Marc'Aurelio Ranzato
     22 Apr 2020 · 133 citations

 26. Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation
     Jian Liang, Dapeng Hu, Jiashi Feng
     20 Feb 2020 · 1,255 citations

 27. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One
     Will Grathwohl, Kuan-Chieh Wang, J. Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky
     06 Dec 2019 · VLM · 547 citations

 28. Learning Non-Convergent Non-Persistent Short-Run MCMC Toward Energy-Based Model
     Erik Nijkamp, Mitch Hill, Song-Chun Zhu, Ying Nian Wu
     22 Apr 2019 · 214 citations

 29. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
     Dan Hendrycks, Thomas G. Dietterich
     28 Mar 2019 · OOD, VLM · 3,462 citations

 30. On Calibration of Modern Neural Networks
     Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
     14 Jun 2017 · UQCV · 5,891 citations

 31. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
     Dan Hendrycks, Kevin Gimpel
     07 Oct 2016 · UQCV · 3,488 citations

 32. Deep Directed Generative Models with Energy-Based Probability Estimation
     Taesup Kim, Yoshua Bengio
     10 Jun 2016 · GAN · 136 citations

 33. Wide Residual Networks
     Sergey Zagoruyko, N. Komodakis
     23 May 2016 · 8,013 citations

 34. Adam: A Method for Stochastic Optimization
     Diederik P. Kingma, Jimmy Ba
     22 Dec 2014 · ODL · 150,597 citations