Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers (arXiv:2311.01555)
2 November 2023
Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren
ALM
Papers citing "Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers" (8 of 8 papers shown)
RankFlow: A Multi-Role Collaborative Reranking Workflow Utilizing Large Language Models
Can Jin, Hongwu Peng, Anxiang Zhang, Nuo Chen, Jiahui Zhao, ..., K. Li, Shuya Feng, Kai Zhong, Caiwen Ding, Dimitris N. Metaxas
02 Feb 2025
TourRank: Utilizing Large Language Models for Documents Ranking with a Tournament-Inspired Strategy
Yiqun Chen, Qi Liu, Yi Zhang, Weiwei Sun, Daiting Shi, Jiaxin Mao, Dawei Yin
17 Jun 2024
Towards Completeness-Oriented Tool Retrieval for Large Language Models
Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, Jirong Wen
KELM
25 May 2024
Zero-Shot Listwise Document Reranking with a Large Language Model
Xueguang Ma, Xinyu Crystina Zhang, Ronak Pradeep, Jimmy J. Lin
03 May 2023
Generate rather than Retrieve: Large Language Models are Strong Context Generators
W. Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng-Long Jiang
RALM, AIMat
21 Sep 2022
BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
Nandan Thakur, Nils Reimers, Andreas Rucklé, Abhishek Srivastava, Iryna Gurevych
VLM
17 Apr 2021
Overview of the TREC 2020 deep learning track
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos
15 Feb 2021
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
MoE
17 Sep 2019