
Emotional RAG LLMs: Reading Comprehension for the Open Internet

Main: 8 pages · Appendix: 8 pages · Bibliography: 3 pages · 21 figures · 9 tables
Abstract

Queries to large language models (LLMs) can be divided into two parts: the instruction/question and the accompanying context. The context for retrieval-augmented generation (RAG) systems in most benchmarks comes from Wikipedia-like texts written in a neutral and factual tone. However, real-world RAG applications often retrieve internet-based text with diverse tones and linguistic styles, posing challenges for downstream tasks. This paper introduces (a) a dataset that transforms RAG-retrieved passages into emotionally inflected and sarcastic text, (b) an emotion translation model for adapting text to different tones, and (c) a prompt-based method to improve LLMs' pragmatic interpretation of retrieved text.
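As a rough illustration of contribution (c), the sketch below shows one way a pragmatic-reading instruction could be prepended to retrieved passages in a RAG prompt before the question is answered. The prompt wording, helper names, and the `generate` callable are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (not the paper's actual prompt) of a prompt-based
# pragmatic-interpretation step for a RAG pipeline: the model is asked
# to discount emotional or sarcastic tone in retrieved text and answer
# from the literal factual content only.

from typing import Callable, List

PRAGMATIC_INSTRUCTION = (
    "The passages below may be written in an emotional or sarcastic tone. "
    "Focus on their literal factual content, discount rhetorical exaggeration, "
    "and answer the question using only information supported by the passages."
)

def build_prompt(question: str, passages: List[str]) -> str:
    """Assemble a RAG prompt that asks the model to read retrieved text pragmatically."""
    context = "\n\n".join(f"Passage {i + 1}: {p}" for i, p in enumerate(passages))
    return f"{PRAGMATIC_INSTRUCTION}\n\n{context}\n\nQuestion: {question}\nAnswer:"

def answer(question: str, passages: List[str], generate: Callable[[str], str]) -> str:
    """`generate` is any text-completion function, e.g. a wrapper around an LLM API call."""
    return generate(build_prompt(question, passages))
```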
