When to Trust LLMs: Aligning Confidence with Response Quality
arXiv:2404.17287

26 April 2024
Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie, Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, Bolin Ding

Papers citing "When to Trust LLMs: Aligning Confidence with Response Quality"

3 / 3 papers shown

Enhancing LLM Reliability via Explicit Knowledge Boundary Modeling
Hang Zheng, Hongshen Xu, Yuncong Liu, Lu Chen, Pascale Fung, Kai Yu
04 Mar 2025

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi
Community: RALM
17 Oct 2023

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, Geoffrey Irving
Community: ALM
18 Sep 2019