
AdaCAD: Adaptively Decoding to Balance Conflicts between Contextual and Parametric Knowledge

Abstract

Knowledge conflict arises from discrepancies between information in the context of a large language model (LLM) and the knowledge stored in its parameters. This can hurt performance when using standard decoding techniques, which tend to ignore the context. Existing test-time contrastive methods seek to address this by comparing the LLM's output distribution with and without the context and adjusting the model according to the contrast between them. However, we find that these methods frequently misjudge the degree of conflict and struggle to handle instances that vary in their amount of conflict, with static methods over-adjusting when conflict is absent. We propose a fine-grained, instance-level approach called AdaCAD, which dynamically infers the weight of adjustment based on the degree of conflict, as measured by the Jensen-Shannon divergence between distributions representing contextual and parametric knowledge. Across four LLMs, six question-answering (QA) datasets, and three summarization datasets, we demonstrate that AdaCAD consistently outperforms other decoding baselines, with average QA accuracy gains of 14.21% (absolute) over a static contrastive baseline, and improves the factuality of summaries by 6.19 points (AlignScore). Lastly, we show that while contrastive baselines hurt performance when conflict is absent, AdaCAD mitigates these losses, making it more applicable to real-world datasets in which some examples have conflict and others do not.
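The core mechanism described above can be sketched in a few lines: compute the Jensen-Shannon divergence (JSD) between the next-token distributions with and without the context, then use that divergence as the per-instance contrastive weight. The function names and the exact form of the adjustment below are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete distributions.
    Returns a value in [0, 1]: 0 when p == q, approaching 1 under maximal conflict."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def adaptive_contrast_probs(p_context, p_parametric, eps=1e-12):
    """Adaptively contrast-adjust the next-token distribution (illustrative sketch).

    alpha = JSD(p_context, p_parametric): near 0 when contextual and parametric
    knowledge agree (little adjustment), larger under strong conflict (stronger
    boost toward the contextual distribution).
    """
    alpha = jsd(p_context, p_parametric)
    # Contrastive adjustment in log space: amplify context, subtract prior.
    logits = (1 + alpha) * np.log(np.asarray(p_context) + eps) \
             - alpha * np.log(np.asarray(p_parametric) + eps)
    probs = np.exp(logits - logits.max())  # stable softmax
    return probs / probs.sum(), alpha
```

When the two distributions agree, alpha is near zero and the output stays close to the contextual distribution, which is how the adaptive weight avoids the over-adjustment that static contrastive decoding suffers on conflict-free examples.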

@article{wang2025_2409.07394,
  title={AdaCAD: Adaptively Decoding to Balance Conflicts between Contextual and Parametric Knowledge},
  author={Han Wang and Archiki Prasad and Elias Stengel-Eskin and Mohit Bansal},
  journal={arXiv preprint arXiv:2409.07394},
  year={2025}
}