Post-Abstention: Towards Reliably Re-Attempting the Abstained Instances in QA

Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Neeraj Varshney, Chitta Baral
arXiv:2305.01812 · 2 May 2023

Papers citing "Post-Abstention: Towards Reliably Re-Attempting the Abstained Instances in QA"

11 citing papers

Efficiently Deploying LLMs with Controlled Risk
Michael J. Zellinger, Matt Thomson
03 Oct 2024

Crowd-Calibrator: Can Annotator Disagreement Inform Calibration in Subjective Tasks?
Urja Khurana, Eric T. Nalisnick, Antske Fokkens, Swabha Swayamdipta
26 Aug 2024

Do LLMs Know When to NOT Answer? Investigating Abstention Abilities of Large Language Models
Nishanth Madhusudhan, Sathwik Tejaswi Madhusudhan, Vikas Yadav, Masoud Hashemi
23 Jul 2024

Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness
Khyathi Chandu, Linjie Li, Anas Awadalla, Ximing Lu, Jae Sung Park, Jack Hessel, Lijuan Wang, Yejin Choi
02 Jul 2024

LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements
Victoria Basmov, Yoav Goldberg, Reut Tsarfaty
09 Apr 2024

Can NLP Models 'Identify', 'Distinguish', and 'Justify' Questions that Don't have a Definitive Answer?
Ayushi Agarwal, Nisarg Patel, Neeraj Varshney, Mihir Parmar, Pavan Mallina, Aryan Bhavin Shah, Srihari Sangaraju, Tirth Patel, Nihar Thakkar, Chitta Baral
ELM
08 Sep 2023

A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu
HILM
08 Jul 2023

Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models
Zhen Lin, Shubhendu Trivedi, Jimeng Sun
HILM
30 May 2023

Mitigating Temporal Misalignment by Discarding Outdated Facts
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Michael J.Q. Zhang, Eunsol Choi
KELM, HILM
24 May 2023

Ambiguity Meets Uncertainty: Investigating Uncertainty Estimation for Word Sense Disambiguation
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Zhuo Liu, Ying Liu
UD
22 May 2023

A Unified Evaluation Framework for Novelty Detection and Accommodation in NLP with an Instantiation in Authorship Attribution
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Neeraj Varshney, Himanshu Gupta, Eric Robertson, Yinan Han, Chitta Baral
08 May 2023