ResearchTrend.AI
Know Your Limits: A Survey of Abstention in Large Language Models

25 July 2024
Bingbing Wen
Jihan Yao
Shangbin Feng
Chenjun Xu
Yulia Tsvetkov
Bill Howe
Lucy Lu Wang
Abstract

Abstention, the refusal of large language models (LLMs) to provide an answer, is increasingly recognized for its potential to mitigate hallucinations and enhance safety in LLM systems. In this survey, we introduce a framework to examine abstention from three perspectives: the query, the model, and human values. We organize the literature on abstention methods, benchmarks, and evaluation metrics using this framework, and discuss the merits and limitations of prior work. We further identify and motivate areas for future research, such as whether abstention can be achieved as a meta-capability that transcends specific tasks or domains, and opportunities to optimize abstention abilities in specific contexts. In doing so, we aim to broaden the scope and impact of abstention methodologies in AI systems.

@article{wen2025_2407.18418,
  title={Know Your Limits: A Survey of Abstention in Large Language Models},
  author={Bingbing Wen and Jihan Yao and Shangbin Feng and Chenjun Xu and Yulia Tsvetkov and Bill Howe and Lucy Lu Wang},
  journal={arXiv preprint arXiv:2407.18418},
  year={2025}
}