REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy

arXiv:2406.07735 · 11 June 2024
Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, Tagyoung Chung
Tags: HILM

Papers citing "REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy" (10 of 10 papers shown)
Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs (21 Mar 2025)
Anshumann, Mohd Abbas Zaidi, Akhil Kedia, Jinwoo Ahn, Taehwak Kwon, Kangwook Lee, Haejun Lee, Joohyung Lee
Tags: FedML · Metrics: 71 / 0 / 0
Explaining and Improving Contrastive Decoding by Extrapolating the Probabilities of a Huge and Hypothetical LM (03 Nov 2024)
Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, Tagyoung Chung
Metrics: 18 / 1 / 0
Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models (14 Apr 2024)
Souvik Das, Lifeng Jin, Linfeng Song, Haitao Mi, Baolin Peng, Dong Yu
Tags: HILM · Metrics: 35 / 2 / 0
How Language Model Hallucinations Can Snowball (22 May 2023)
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
Tags: HILM, LRM · Metrics: 75 / 246 / 0
The Internal State of an LLM Knows When It's Lying (26 Apr 2023)
A. Azaria, Tom Michael Mitchell
Tags: HILM · Metrics: 210 / 297 / 0
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (15 Mar 2023)
Potsawee Manakul, Adian Liusie, Mark J. F. Gales
Tags: HILM, LRM · Metrics: 145 / 386 / 0
Faithfulness-Aware Decoding Strategies for Abstractive Summarization (06 Mar 2023)
David Wan, Mengwen Liu, Kathleen McKeown, Markus Dreyer, Mohit Bansal
Tags: HILM · Metrics: 111 / 20 / 0
Self-Consistency Improves Chain of Thought Reasoning in Language Models (21 Mar 2022)
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
Tags: ReLM, BDL, LRM, AI4CE · Metrics: 297 / 3,163 / 0
Training language models to follow instructions with human feedback (04 Mar 2022)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM · Metrics: 301 / 11,730 / 0
A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation (18 Apr 2021)
Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, W. Dolan
Tags: HILM · Metrics: 209 / 140 / 0