ResearchTrend.AI

"Confidently Nonsensical?": A Critical Survey on the Perspectives and Challenges of 'Hallucinations' in NLP

11 April 2024
Pranav Narayanan Venkit
Tatiana Chakravorti
Vipul Gupta
Heidi Biggs
Mukund Srinath
Koustava Goswami
Sarah Rajtmajer
Shomir Wilson
    HILM

Papers citing "Confidently Nonsensical?": A Critical Survey on the Perspectives and Challenges of 'Hallucinations' in NLP

9 / 9 papers shown
From Melting Pots to Misrepresentations: Exploring Harms in Generative AI
Sanjana Gautam
Pranav Narayanan Venkit
Sourojit Ghosh
16 Mar 2024

Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang
Lin Qiu
Qipeng Guo
Cheng Deng
Yue Zhang
Zheng-Wei Zhang
Cheng Zhou
Xinbing Wang
Luoyi Fu
HILM
22 Nov 2023

CRUSH4SQL: Collective Retrieval Using Schema Hallucination For Text2SQL
Mayank Kothyari
Dhruva Dhingra
Sunita Sarawagi
Soumen Chakrabarti
02 Nov 2023

"According to ...": Prompting Language Models Improves Quoting from Pre-Training Data
Orion Weller
Marc Marone
Nathaniel Weir
Dawn J Lawrie
Daniel Khashabi
Benjamin Van Durme
HILM
22 May 2023

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Minghao Wu
Abdul Waheed
Chiyu Zhang
Muhammad Abdul-Mageed
Alham Fikri Aji
ALM
27 Apr 2023

Models See Hallucinations: Evaluating the Factuality in Video Captioning
Hui Liu
Xiaojun Wan
HILM
06 Mar 2023

Entity-Based Knowledge Conflicts in Question Answering
Shayne Longpre
Kartik Perisetla
Anthony Chen
Nikhil Ramesh
Chris DuBois
Sameer Singh
HILM
10 Sep 2021

Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Mengyao Cao
Yue Dong
Jackie C.K. Cheung
HILM
30 Aug 2021

Entity-level Factual Consistency of Abstractive Text Summarization
Feng Nan
Ramesh Nallapati
Zhiguo Wang
Cicero Nogueira dos Santos
Henghui Zhu
Dejiao Zhang
Kathleen McKeown
Bing Xiang
HILM
18 Feb 2021