Open Domain Question Answering with Conflicting Contexts

16 October 2024
Siyi Liu, Qiang Ning, Kishaloy Halder, Wei Xiao, Zheng Qi, Phu Mon Htut, Yi Zhang, Neha Anna John, Bonan Min, Yassine Benajiba, Dan Roth
Abstract

Open domain question answering systems frequently rely on information retrieved from large collections of text (such as the Web) to answer questions. However, such collections often contain conflicting information, and depending on it indiscriminately may result in untruthful and inaccurate answers. To understand the severity of this problem, we collect a human-annotated dataset, Question Answering with Conflicting Contexts (QACC), and find that as many as 25% of unambiguous, open domain questions can lead to conflicting contexts when retrieved using Google Search. We evaluate and benchmark three powerful Large Language Models (LLMs) on QACC and demonstrate their limitations in effectively addressing questions with conflicting information. To explore how humans reason through conflicting contexts, we ask our annotators to explain their selections of correct answers. We show that by finetuning LLMs to explain their answers, we can introduce richer information into their training that guides them through the process of reasoning with conflicting contexts.
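To make the finetuning idea concrete, here is a minimal sketch of how one QACC-style instance might be turned into a supervised training record whose target includes the annotator's explanation. The field names, prompt template, and example data below are illustrative assumptions, not the paper's actual format.

```python
# Hedged sketch: assembling an explanation-based finetuning record
# from a hypothetical QACC-style example. All field names and the
# prompt wording are assumptions for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class QACCExample:
    question: str
    contexts: List[str]   # retrieved passages, possibly conflicting
    answer: str           # human-annotated correct answer
    explanation: str      # annotator's reasoning for the selection


def to_finetuning_record(ex: QACCExample) -> dict:
    """Render one example as a prompt/completion pair whose target
    contains the explanation, so the model is trained to reason
    through the conflict rather than emit an answer alone."""
    numbered = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(ex.contexts))
    prompt = (
        "The passages below may contain conflicting information.\n"
        f"{numbered}\n\n"
        f"Question: {ex.question}\n"
        "Explain which passages to trust, then give the answer."
    )
    completion = f"Explanation: {ex.explanation}\nAnswer: {ex.answer}"
    return {"prompt": prompt, "completion": completion}


if __name__ == "__main__":
    ex = QACCExample(
        question="What year did the bridge open?",
        contexts=[
            "The bridge opened to traffic in 1931.",
            "Construction of the bridge finished in 1932.",
        ],
        answer="1931",
        explanation=(
            "The first passage states the opening date directly; "
            "the second refers to when construction work ended."
        ),
    )
    print(to_finetuning_record(ex))
```

Pairing the answer with its explanation in the completion is what distinguishes this setup from standard answer-only finetuning: the loss is also computed over the reasoning tokens, which is how the richer annotator signal enters training.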

@article{liu2025_2410.12311,
  title={Open Domain Question Answering with Conflicting Contexts},
  author={Siyi Liu and Qiang Ning and Kishaloy Halder and Wei Xiao and Zheng Qi and Phu Mon Htut and Yi Zhang and Neha Anna John and Bonan Min and Yassine Benajiba and Dan Roth},
  journal={arXiv preprint arXiv:2410.12311},
  year={2025}
}