CASE -- Condition-Aware Sentence Embeddings for Conditional Semantic Textual Similarity Measurement

Abstract

The meaning conveyed by a sentence often depends on the context in which it appears. Despite the progress of sentence embedding methods, it remains unclear how to best modify a sentence embedding conditioned on its context. To address this problem, we propose Condition-Aware Sentence Embeddings (CASE), an efficient and accurate method to create an embedding for a sentence under a given condition. First, CASE creates an embedding for the condition using a Large Language Model (LLM), where the sentence influences the attention scores computed for the tokens in the condition during pooling. Next, a supervised nonlinear projection is learned to reduce the dimensionality of the LLM-based text embeddings. We show that CASE significantly outperforms previously proposed Conditional Semantic Textual Similarity (C-STS) methods on an existing standard benchmark dataset. We find that subtracting the condition embedding consistently improves the C-STS performance of LLM-based text embeddings. Moreover, we propose a supervised dimensionality reduction method that not only reduces the dimensionality of LLM-based embeddings but also significantly improves their performance.
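The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the encoder is a random deterministic stand-in for an LLM-based text embedder, the `[SEP]`-style concatenation approximates the sentence-conditioned attention over condition tokens, and the single `tanh` projection stands in for the learned supervised nonlinear dimensionality reduction. All function names and parameters here are hypothetical.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 1024) -> np.ndarray:
    """Stand-in for an LLM-based text encoder (deterministic per input).

    In CASE, the sentence influences the attention scores over the
    condition's tokens during pooling; here we simply embed text.
    """
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(dim)
    return h / np.linalg.norm(h)

def case_embedding(sentence: str, condition: str, W: np.ndarray) -> np.ndarray:
    """Hypothetical CASE-style pipeline:
    1) embed the sentence together with the condition,
    2) subtract the condition embedding (shown to help C-STS),
    3) apply a learned nonlinear projection (here a fixed linear map + tanh).
    """
    e_sent_cond = embed(sentence + " [SEP] " + condition)
    e_cond = embed(condition)
    return np.tanh(W @ (e_sent_cond - e_cond))

# Toy projection matrix standing in for the supervised projection (1024 -> 256).
W = np.random.default_rng(0).standard_normal((256, 1024)) * 0.01

v1 = case_embedding("A dog chases a ball.", "the animal involved", W)
v2 = case_embedding("A puppy runs after a toy.", "the animal involved", W)
sim = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```

With a real encoder, `sim` would be the conditional similarity score compared against C-STS annotations; the dimensionality reduction (1024 to 256 above) also cuts storage and comparison cost.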

@article{zhang2025_2503.17279,
  title={CASE -- Condition-Aware Sentence Embeddings for Conditional Semantic Textual Similarity Measurement},
  author={Gaifan Zhang and Yi Zhou and Danushka Bollegala},
  journal={arXiv preprint arXiv:2503.17279},
  year={2025}
}