Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models
arXiv: 2101.06829
18 January 2021
Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl

Papers citing "Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models" (10 of 10 papers shown)

1. Entriever: Energy-based Retriever for Knowledge-Grounded Dialog Systems
   Yucheng Cai, Ke Li, Y. Huang, Junlan Feng, Zhijian Ou
   31 May 2025

2. Energy-based Preference Optimization for Test-time Adaptation
   Yewon Han, Seoyun Yang, Taesup Kim
   Topics: TTA
   26 May 2025

3. LSEBMCL: A Latent Space Energy-Based Model for Continual Learning
   Xiaodi Li, Dingcheng Li, Rujun Gao, Mahmoud Zamani, Latifur Khan
   Topics: CLL, KELM
   09 Jan 2025

4. Calibration Meets Explanation: A Simple and Effective Approach for Model Confidence Estimates
   Dongfang Li, Baotian Hu, Qingcai Chen
   06 Nov 2022

5. Consistent Training via Energy-Based GFlowNets for Modeling Discrete Joint Distributions
   C. Ekbote, Moksh Jain, Payel Das, Yoshua Bengio
   01 Nov 2022

6. EBMs vs. CL: Exploring Self-Supervised Visual Pretraining for Visual Question Answering
   Violetta Shevchenko, Ehsan Abbasnejad, A. Dick, Anton Van Den Hengel, Damien Teney
   29 Jun 2022

7. On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting
   Tomasz Korbak, Hady ElSahar, Germán Kruszewski, Marc Dymetman
   Topics: CLL
   01 Jun 2022

8. On the Calibration of Pre-trained Language Models using Mixup Guided by Area Under the Margin and Saliency
   Seohong Park, Cornelia Caragea
   Topics: UQCV
   14 Mar 2022

9. Sampling from Discrete Energy-Based Models with Quality/Efficiency Trade-offs
   B. Eikema, Germán Kruszewski, Hady ElSahar, Marc Dymetman
   10 Dec 2021

10. Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
    Will Grathwohl, Kevin Swersky, Milad Hashemi, David Duvenaud, Chris J. Maddison
    Topics: BDL
    08 Feb 2021