Rank1: Test-Time Compute for Reranking in Information Retrieval

25 February 2025
Orion Weller
Kathryn Ricci
Eugene Yang
Andrew Yates
Dawn J Lawrie
Benjamin Van Durme
Abstract

We introduce Rank1, the first reranking model trained to take advantage of test-time compute. Rank1 demonstrates that a reasoning language model (e.g., OpenAI's o1 or DeepSeek's R1) can be distilled within retrieval to rapidly improve the performance of a smaller model. We gather and open-source a dataset of more than 600,000 R1 reasoning traces generated from queries and passages in MS MARCO. Models trained on this dataset: (1) achieve state-of-the-art performance on advanced reasoning and instruction-following datasets; (2) generalize remarkably well out of distribution, thanks to their ability to respond to user-input prompts; and (3) produce explainable reasoning chains that can be surfaced to users or RAG-based systems. Further, we demonstrate that quantized versions of these models retain strong performance while using less compute and memory. Overall, Rank1 shows that test-time compute enables a fundamentally new type of explainable and performant reranker model for search.
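To make the pointwise reranking setup concrete, here is a minimal sketch of how a reasoning-model reranker of this kind might be wired up. It assumes (hypothetically; the paper's exact prompt and output format may differ) that the model emits a reasoning chain inside `<think>...</think>` tags followed by a binary `true`/`false` relevance judgment; the `fake_generate` stub stands in for an actual model call.

```python
import re

def parse_judgment(output: str) -> float:
    """Strip the reasoning chain and read off the binary relevance label.

    Assumes the model emits <think>...</think> followed by 'true' or 'false'
    (a hypothetical format for illustration).
    """
    answer = re.sub(r"<think>.*?</think>", "", output, flags=re.DOTALL)
    return 1.0 if answer.strip().lower().startswith("true") else 0.0

def rerank(query: str, passages: list[str], generate) -> list[str]:
    """Score each passage independently with the reasoning model, then sort."""
    scored = []
    for passage in passages:
        prompt = f"Query: {query}\nPassage: {passage}\nIs the passage relevant?"
        scored.append((parse_judgment(generate(prompt)), passage))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [passage for _, passage in scored]

def fake_generate(prompt: str) -> str:
    """Stub standing in for a real reasoning-LM call: emits a trace + label."""
    passage_text = prompt.split("Passage:")[-1]
    relevant = "reasoning" in passage_text
    trace = "<think>Compare the query's information need to the passage.</think>"
    return trace + (" true" if relevant else " false")

ranked = rerank(
    "test-time compute reranking",
    ["A recipe for sourdough bread.", "Reranking with reasoning traces."],
    fake_generate,
)
```

A useful property of this design, which the abstract highlights, is that the `<think>` span can be returned to users or a downstream RAG system as an explanation rather than being discarded.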

@article{weller2025_2502.18418,
  title={Rank1: Test-Time Compute for Reranking in Information Retrieval},
  author={Orion Weller and Kathryn Ricci and Eugene Yang and Andrew Yates and Dawn Lawrie and Benjamin Van Durme},
  journal={arXiv preprint arXiv:2502.18418},
  year={2025}
}