Can Cross Encoders Produce Useful Sentence Embeddings?

5 February 2025
Haritha Ananthakrishnan
Julian T Dolby
Harsha Kokel
Horst Samulowitz
Kavitha Srinivas
Abstract

Cross encoders (CEs) are trained on sentence pairs to detect relatedness. Because CEs require sentence pairs at inference, the prevailing view is that they can only be used as re-rankers in information retrieval pipelines. Dual encoders (DEs) are instead used to embed sentences: each sentence in a pair is encoded by one of two encoders with shared weights at training time, and a loss function ensures that the pair's embeddings lie close in vector space if the sentences are related. DEs, however, require much larger training datasets and are less accurate than CEs. We report a curious finding: embeddings from earlier layers of CEs can in fact be used within an information retrieval pipeline. We show how to exploit CEs to distill a lighter-weight DE, achieving a 5.15x speedup in inference time.
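The core operation implied by the abstract is turning per-token hidden states from an earlier CE layer into a single sentence vector usable for retrieval. Below is a minimal sketch of one common choice, masked mean pooling followed by cosine similarity; the pooling strategy and the random placeholder hidden states are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    # hidden_states: (seq_len, dim) token vectors taken from an earlier
    # layer of a cross encoder (placeholder data here, not a real model).
    # attention_mask: (seq_len,) with 1 for real tokens, 0 for padding.
    mask = attention_mask[:, None].astype(float)
    return (hidden_states * mask).sum(axis=0) / mask.sum()

def cosine(a, b):
    # Cosine similarity, the usual relatedness score in retrieval pipelines.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 8))            # 6 tokens, 8-dim hidden states
mask = np.array([1, 1, 1, 1, 0, 0])    # last two positions are padding
e = mean_pool(h, mask)
print(e.shape)  # (8,)
```

In a distillation setup like the one the paper describes, such pooled teacher vectors could serve as regression targets for a lighter-weight DE student.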

@article{ananthakrishnan2025_2502.03552,
  title={Can Cross Encoders Produce Useful Sentence Embeddings?},
  author={Haritha Ananthakrishnan and Julian Dolby and Harsha Kokel and Horst Samulowitz and Kavitha Srinivas},
  journal={arXiv preprint arXiv:2502.03552},
  year={2025}
}