SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
11 January 2024
Jushi Kai, Hai Hu, Zhouhan Lin
HILM

Papers citing "SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully"

10 / 10 papers shown
Consistency Is the Key: Detecting Hallucinations in LLM Generated Text By Checking Inconsistencies About Key Facts
Raavi Gupta, Pranav Hari Panicker, S. Bhatia, Ganesh Ramakrishnan
HILM · 15 Nov 2025

MLP Memory: A Retriever-Pretrained Memory for Large Language Models
Rubin Wei, Jiaqi Cao, Jiarui Wang, Jushi Kai, Qipeng Guo, Bowen Zhou, Zhouhan Lin
RALM · 03 Aug 2025

LayerCake: Token-Aware Contrastive Decoding within Large Language Model Layers
Jingze Zhu, Y. Wu, Wenbo Zhu, Jiawang Cao, Y. Zheng, Jiawei Chen, Xu Yang, Bernt Schiele, Jonas Fischer, Xinting Hu
OffRL · 06 Jul 2025

Expanding before Inferring: Enhancing Factuality in Large Language Models through Premature Layers Interpolation
Dingwei Chen, Ziqiang Liu, Feiteng Fang, Chak Tou Leong, Shiwen Ni, A. Argha, Hamid Alinejad-Rokny, Min Yang, Chengming Li
KELM, HILM · 03 Jun 2025

Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators
AAAI Conference on Artificial Intelligence (AAAI), 2024
Jinjie Wei, Dongling Xiao, Mingcheng Li, Zhaoyu Chen, Ke Li, Li Zhang
HILM · 28 Jan 2025

HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection
Neural Information Processing Systems (NeurIPS), 2024
Xuefeng Du, Chaowei Xiao, Yixuan Li
HILM · 26 Sep 2024

Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs
Shiping Liu, Kecheng Zheng, Wei Chen
MLLM · 31 Jul 2024

Mitigating Large Language Model Hallucination with Faithful Finetuning
Minda Hu, Bowei He, Yufei Wang, Liangyou Li, Chen Ma, Irwin King
HILM · 17 Jun 2024

TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
Shaolei Zhang, Tian Yu, Yang Feng
HILM, KELM · 27 Feb 2024

Generating Benchmarks for Factuality Evaluation of Language Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Y. Shoham
HILM · 13 Jul 2023