A Study on the Calibration of In-context Learning
arXiv:2312.04021 · v4 (latest) · 7 December 2023
Hanlin Zhang, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Phillips Foster, Eric Xing, Hima Lakkaraju, Sham Kakade
Links: arXiv (abs) · PDF · HTML

Papers citing "A Study on the Calibration of In-context Learning"

12 / 12 papers shown
COM-BOM: Bayesian Exemplar Search for Efficiently Exploring the Accuracy-Calibration Pareto Frontier
Gaoxiang Luo, Aryan Deshwal
01 Oct 2025

IA2: Alignment with ICL Activations Improves Supervised Fine-Tuning
Aayush Mishra, Daniel Khashabi, Anqi Liu
26 Sep 2025

Using Large Language Models to Categorize Strategic Situations and Decipher Motivations Behind Human Behaviors
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2025
Yutong Xie, Qiaozhu Mei, Walter Yuan, Matthew O. Jackson
20 Mar 2025

Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models
Prateek Chhikara
16 Feb 2025

In-Context Learning (and Unlearning) of Length Biases
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
S. Schoch, Yangfeng Ji
10 Feb 2025

Bypassing the Exponential Dependency: Looped Transformers Efficiently Learn In-context by Multi-step Gradient Descent
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Bo Chen, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song
15 Oct 2024

Calibrate to Discriminate: Improve In-Context Learning with Label-Free Comparative Inference
Wei Cheng, Tianlu Wang, Yanmin Ji, Fan Yang, Keren Tan, Yiyu Zheng
03 Oct 2024

Calibrating Language Models with Adaptive Temperature Scaling
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Johnathan Xie, Annie S. Chen, Yoonho Lee, Eric Mitchell, Chelsea Finn
29 Sep 2024

Why Larger Language Models Do In-context Learning Differently?
Zhenmei Shi, Junyi Wei, Zhuoyan Xu, Yingyu Liang
30 May 2024

Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach
Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen
24 Apr 2024

Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Moxin Li, Wenjie Wang, Fuli Feng, Fengbin Zhu, Qifan Wang, Tat-Seng Chua
Communities: HILM, LRM
15 Mar 2024

Understanding the Effects of Iterative Prompting on Truthfulness
International Conference on Machine Learning (ICML), 2024
Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
Communities: HILM
09 Feb 2024