ResearchTrend.AI
SynDL: A Large-Scale Synthetic Test Collection for Passage Retrieval

The Web Conference (WWW), 2024
28 January 2025
Hossein A. Rahmani, Xi Wang, Emine Yilmaz, Nick Craswell, Bhaskar Mitra, Paul Thomas
arXiv:2408.16312

Papers citing "SynDL: A Large-Scale Synthetic Test Collection for Passage Retrieval"

12 of 12 citing papers shown
Topic-Specific Classifiers are Better Relevance Judges than Prompted LLMs
Lukas Gienapp, Martin Potthast, Harrisen Scells, Eugene Yang
06 Oct 2025

Towards Understanding Bias in Synthetic Data for Evaluation
Hossein A. Rahmani, Varsha Ramineni, Nick Craswell, Bhaskar Mitra, Emine Yilmaz
12 Jun 2025

DisastIR: A Comprehensive Information Retrieval Benchmark for Disaster Management
Kai Yin, Xiangjue Dong, Chengkai Liu, Lipai Huang, Yiming Xiao, Zhewei Liu, Ali Mostafavi, James Caverlee
20 May 2025

The Viability of Crowdsourcing for RAG Evaluation (SIGIR, 2025)
Lukas Gienapp, Tim Hagen, Maik Fröbe, Matthias Hagen, Benno Stein, Martin Potthast, Harrisen Scells
22 Apr 2025

Improving the Reusability of Conversational Search Test Collections (ECIR, 2025)
Zahra Abbasiantaeb, Chuan Meng, Leif Azzopardi, Mohammad Aliannejadi
12 Mar 2025

Synthetic Test Collections for Retrieval Evaluation (SIGIR, 2024)
Hossein A. Rahmani, Nick Craswell, Emine Yilmaz, Bhaskar Mitra, Daniel Fernando Campos
13 May 2024

Large language models can accurately predict searcher preferences (SIGIR, 2023)
Paul Thomas, S. Spielman, Nick Craswell, Bhaskar Mitra
19 Sep 2023

G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment (EMNLP, 2023)
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, Chenguang Zhu
29 Mar 2023

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
Nandan Thakur, Nils Reimers, Andreas Rucklé, Abhishek Srivastava, Iryna Gurevych
17 Apr 2021

Simplified Data Wrangling with ir_datasets (SIGIR, 2021)
Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, Nazli Goharian
03 Mar 2021

Overview of the TREC 2019 deep learning track
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, E. Voorhees
17 Mar 2020

MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
Payal Bajaj, Daniel Fernando Campos, Nick Craswell, Li Deng, Jianfeng Gao, ..., Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
28 Nov 2016