ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Strong hallucinations from negation and how to fix them

Nicholas Asher, Swarnadeep Bhar
16 February 2024
Tags: ReLM, LRM

Papers citing "Strong hallucinations from negation and how to fix them" (5 of 5 papers shown):
  • Hallucination Detection in Large Language Models with Metamorphic Relations
    Borui Yang, Md Afif Al Mamun, Jie M. Zhang, Gias Uddin
    Tags: HILM · 20 Feb 2025
  • Entailment Semantics Can Be Extracted from an Ideal Language Model
    William Merrill, Alex Warstadt, Tal Linzen
    26 Sep 2022
  • Entity-Based Knowledge Conflicts in Question Answering
    Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, Sameer Singh
    Tags: HILM · 10 Sep 2021
  • The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey
    Yi-Chong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin
    Tags: HILM · 30 Apr 2021
  • Language Models as Knowledge Bases?
    Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
    Tags: KELM, AI4MH · 03 Sep 2019