Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations

6 October 2023
Deren Lei, Yaxi Li, Mengya Hu, Mingyu Wang, Vincent Yun, Emily Ching, Eslam Kamal
Tags: HILM, LRM

Papers citing "Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations"

10 / 10 papers shown

Calibrating Verbal Uncertainty as a Linear Feature to Reduce Hallucinations (18 Mar 2025)
Ziwei Ji, L. Yu, Yeskendir Koishekenov, Yejin Bang, Anthony Hartshorn, Alan Schelten, Cheng Zhang, Pascale Fung, Nicola Cancedda
41 / 1 / 0

Design and evaluation of AI copilots -- case studies of retail copilot templates (17 Jun 2024)
Michal Furmakiewicz, Chang Liu, Angus Taylor, Ilya Venger
23 / 2 / 0

Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector (17 Jun 2024)
Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, Hongzhi Zhang, Fuzheng Zhang, Di Zhang, Kun Gai, Ji-Rong Wen
Tags: HILM, LLMAG
27 / 7 / 0

Beyond Chain-of-Thought: A Survey of Chain-of-X Paradigms for LLMs (24 Apr 2024)
Yu Xia, Rui Wang, Xu Liu, Mingyan Li, Tong Yu, Xiang Chen, Julian McAuley, Shuai Li
Tags: LRM
41 / 16 / 0

Re-Ex: Revising after Explanation Reduces the Factual Errors in LLM Responses (27 Feb 2024)
Juyeon Kim, Jeongeun Lee, Yoonho Chang, Chanyeol Choi, Junseong Kim, Jy-yong Sohn
Tags: KELM, LRM
38 / 2 / 0

How Language Model Hallucinations Can Snowball (22 May 2023)
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
Tags: HILM, LRM
78 / 246 / 0

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (15 Mar 2023)
Potsawee Manakul, Adian Liusie, Mark J. F. Gales
Tags: HILM, LRM
145 / 386 / 0

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (28 Jan 2022)
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM
315 / 8,261 / 0

Entity-level Factual Consistency of Abstractive Text Summarization (18 Feb 2021)
Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, Bing Xiang
Tags: HILM
139 / 156 / 0

Factual Error Correction for Abstractive Summarization Models (17 Oct 2020)
Mengyao Cao, Yue Dong, Jiapeng Wu, Jackie C.K. Cheung
Tags: HILM, KELM
167 / 139 / 0