
A²Search: Ambiguity-Aware Question Answering with Reinforcement Learning

Main: 9 pages · Bibliography: 3 pages · Appendix: 35 pages · 7 figures · 13 tables
Abstract

Recent advances in Large Language Models (LLMs) and Reinforcement Learning (RL) have led to strong performance in open-domain question answering (QA). However, existing models still struggle with questions that admit multiple valid answers. Standard QA benchmarks, which typically assume a single gold answer, overlook this reality and thus produce inappropriate training signals. Existing attempts to handle ambiguity often rely on costly manual annotation, which is difficult to scale to multi-hop datasets such as HotpotQA and MuSiQue. In this paper, we present A²Search, an annotation-free, end-to-end training framework to recognize and handle ambiguity. At its core is an automated pipeline that detects ambiguous questions and gathers alternative answers via trajectory sampling and evidence verification. The model is then optimized with RL using a carefully designed AnsF1 reward, which naturally accommodates multiple answers. Experiments on eight open-domain QA benchmarks demonstrate that A²Search achieves new state-of-the-art performance. With only a single rollout, A²Search-7B yields an average AnsF1@1 score of 48.4% across four multi-hop benchmarks, outperforming all strong baselines, including the substantially larger ReSearch-32B (46.2%). Extensive analyses further show that A²Search resolves ambiguity and generalizes across benchmarks, highlighting that embracing ambiguity is essential for building more reliable QA systems. Our code, data, and model weights can be found at this https URL
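The abstract does not spell out how the AnsF1 reward is computed. A minimal sketch of a set-level answer F1, assuming exact matching after simple string normalization (the `normalize` helper and the matching rule are assumptions, not the paper's definition), could look like this:

```python
import string

def normalize(ans: str) -> str:
    """Lowercase, strip whitespace, and drop punctuation (assumed normalization)."""
    return ans.lower().strip().translate(str.maketrans("", "", string.punctuation))

def ans_f1(predicted: list[str], gold: list[str]) -> float:
    """Set-level F1 between a model's predicted answers and the gold answer set.

    Rewarding overlap between answer sets, rather than a single exact match,
    lets a question with multiple valid answers credit partial coverage.
    """
    pred = {normalize(a) for a in predicted}
    ref = {normalize(a) for a in gold}
    if not pred or not ref:
        return float(pred == ref)
    overlap = len(pred & ref)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Example: a question with two valid answers; recovering only one yields F1 = 2/3.
print(ans_f1(["Paris"], ["Paris", "Lutetia"]))  # 0.666...
```

Under this reading, AnsF1@1 would be this score computed from a single sampled rollout, which matches how the abstract reports the 48.4% figure.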
