
Context-DPO: Aligning Language Models for Context-Faithfulness

Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Main: 7 pages · Appendix: 9 pages · Bibliography: 5 pages · 7 figures · 11 tables
Abstract

Reliable responses from large language models (LLMs) require adherence to user instructions and retrieved information. While alignment techniques help LLMs align with human intentions and values, improving context-faithfulness through alignment remains underexplored. To address this, we propose Context-DPO, the first alignment method specifically designed to enhance LLMs' context-faithfulness. We introduce ConFiQA, a benchmark that simulates Retrieval-Augmented Generation (RAG) scenarios with knowledge conflicts to evaluate context-faithfulness. By contrasting faithful and stubborn responses to questions with provided context from ConFiQA, our Context-DPO aligns LLMs through direct preference optimization. Extensive experiments demonstrate that Context-DPO significantly improves context-faithfulness, achieving improvements of 35% to 280% on popular open-source models. Further analysis shows that Context-DPO preserves LLMs' generative capabilities while providing interpretable insights into context utilization. Our code and data are released at https://github.com/byronBBL/Context-DPO.
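The abstract describes aligning models by preferring context-faithful responses over "stubborn" ones that cling to parametric memory. As a rough illustration only, the sketch below shows the standard DPO objective applied to such (faithful, stubborn) pairs; it assumes per-sequence log-probabilities have already been computed under the trained policy and a frozen reference model, and all names are illustrative rather than taken from the Context-DPO codebase.

```python
# Minimal sketch of a DPO loss over (faithful, stubborn) response pairs.
# Assumes precomputed sequence log-probabilities; names are illustrative.

import torch
import torch.nn.functional as F


def dpo_loss(
    policy_logp_faithful: torch.Tensor,  # log pi_theta(y_faithful | x, context)
    policy_logp_stubborn: torch.Tensor,  # log pi_theta(y_stubborn | x, context)
    ref_logp_faithful: torch.Tensor,     # log pi_ref(y_faithful | x, context)
    ref_logp_stubborn: torch.Tensor,     # log pi_ref(y_stubborn | x, context)
    beta: float = 0.1,                   # strength of the implicit KL penalty
) -> torch.Tensor:
    """Direct preference optimization loss preferring the faithful response."""
    # Log-ratio of policy to reference for each response.
    faithful_logratio = policy_logp_faithful - ref_logp_faithful
    stubborn_logratio = policy_logp_stubborn - ref_logp_stubborn
    # Push the policy to rank the context-faithful response above the stubborn one.
    logits = beta * (faithful_logratio - stubborn_logratio)
    return -F.logsigmoid(logits).mean()


if __name__ == "__main__":
    # Dummy batch of 4 sequence log-probabilities, just to show the call shape.
    torch.manual_seed(0)
    lp = lambda: torch.randn(4)
    print(dpo_loss(lp(), lp(), lp(), lp()).item())
```

In this formulation the frozen reference model keeps the aligned policy close to its original distribution, which is consistent with the paper's claim that generative capabilities are preserved while context-faithfulness improves.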
