
ATEB: Evaluating and Improving Advanced NLP Tasks for Text Embedding Models

Abstract

Traditional text embedding benchmarks primarily evaluate embedding models' ability to capture semantic similarity. However, more advanced NLP tasks, such as safety and factuality classification, require a deeper understanding of text. These tasks demand the ability to comprehend and process complex information, often involving the handling of sensitive content or the verification of factual statements against reliable sources. We introduce a new benchmark designed to assess and highlight the limitations of embedding models trained on existing information-retrieval data mixtures with respect to advanced capabilities, including factuality, safety, instruction following, reasoning, and document-level understanding. This benchmark includes a diverse set of tasks that simulate real-world scenarios where these capabilities are critical, and it reveals gaps in currently advanced embedding models. Furthermore, we propose a novel method that reformulates these various tasks as retrieval tasks. By framing tasks like safety or factuality classification as retrieval problems, we leverage the strengths of retrieval models in capturing semantic relationships while also pushing them to develop a deeper understanding of context and content. Using this approach with single-task fine-tuning, we achieved performance gains of 8\% on factuality classification and 13\% on safety classification. Our code and data will be publicly available.
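The core reformulation described above can be illustrated with a minimal sketch: a classification task becomes retrieval by treating each class label's description as a document and returning the label whose embedding is nearest to the input. The toy bag-of-words embedding and the label descriptions below are illustrative placeholders, not the paper's actual model or data; a real system would use a trained text embedding model.

```python
# Minimal sketch: recasting classification as retrieval (illustrative
# assumption about the general approach, not the paper's implementation).
import math
from collections import Counter


def embed(text):
    # Toy stand-in for a trained embedding model: bag-of-words counts.
    return Counter(text.lower().split())


def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def classify_by_retrieval(query, label_descriptions):
    # Treat each label description as a "document" to retrieve: the
    # predicted class is the one whose description is closest to the query.
    q = embed(query)
    return max(label_descriptions,
               key=lambda lbl: cosine(q, embed(label_descriptions[lbl])))


# Hypothetical label descriptions for a safety-classification task.
labels = {
    "safe": "this text is safe and harmless content",
    "unsafe": "this text contains harmful unsafe content",
}
print(classify_by_retrieval("harmless greeting text", labels))  # → safe
```

With this framing, a single retrieval-trained embedding model can serve many classification tasks simply by swapping in different label descriptions, which is what makes fine-tuning on the retrieval formulation attractive.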

@article{han2025_2502.16766,
  title={ATEB: Evaluating and Improving Advanced NLP Tasks for Text Embedding Models},
  author={Simeng Han and Frank Palma Gomez and Tu Vu and Zefei Li and Daniel Cer and Hansi Zeng and Chris Tar and Arman Cohan and Gustavo Hernandez Abrego},
  journal={arXiv preprint arXiv:2502.16766},
  year={2025}
}