ResearchTrend.AI

Hallucination Detection in LLMs: Fast and Memory-Efficient Finetuned Models

4 September 2024
Gabriel Y. Arteaga
Thomas B. Schön
Nicolas Pielawski

Papers citing "Hallucination Detection in LLMs: Fast and Memory-Efficient Finetuned Models"

2 / 2 papers shown
1. Uncertainty Distillation: Teaching Language Models to Express Semantic Confidence
   Sophia Hager, David Mueller, Kevin Duh, Nicholas Andrews
   18 Mar 2025

2. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
   Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
   UQCV, BDL
   05 Dec 2016