ResearchTrend.AI

Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring
arXiv: 1905.01969 · 22 April 2019
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, Jason Weston

Papers citing "Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring"

10 / 60 papers shown
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, ..., Jaewook Kang, Inho Kang, Jung-Woo Ha, W. Park, Nako Sung
VLM · 243 / 121 / 0 · 10 Sep 2021
Sequential Attention Module for Natural Language Processing
Mengyuan Zhou, Jian Ma, Haiqing Yang, Lian-Xin Jiang, Yang Mo
AI4TS · 13 / 2 / 0 · 07 Sep 2021
Synthesizing Adversarial Negative Responses for Robust Response Ranking and Evaluation
Prakhar Gupta, Yulia Tsvetkov, Jeffrey P. Bigham
34 / 22 / 0 · 10 Jun 2021
Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval
Gregor Geigle, Jonas Pfeiffer, Nils Reimers, Ivan Vulić, Iryna Gurevych
27 / 59 / 0 · 22 Mar 2021
Learning Dense Representations of Phrases at Scale
Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, Danqi Chen
RALM, DML, NAI · 11 / 115 / 0 · 23 Dec 2020
COUGH: A Challenge Dataset and Models for COVID-19 FAQ Retrieval
Xinliang Frederick Zhang, Heming Sun, Xiang Yue, Simon M. Lin, Huan Sun
RALM · 68 / 17 / 0 · 24 Oct 2020
Distilling Dense Representations for Ranking using Tightly-Coupled Teachers
Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy J. Lin
21 / 118 / 0 · 22 Oct 2020
Pretrained Transformers for Text Ranking: BERT and Beyond
Jimmy J. Lin, Rodrigo Nogueira, Andrew Yates
VLM · 219 / 608 / 0 · 13 Oct 2020
How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds
Prithviraj Ammanabrolu, Jack Urbanek, Margaret Li, Arthur Szlam, Tim Rocktaschel, Jason Weston
LM&Ro · 16 / 44 / 0 · 01 Oct 2020
Response Ranking with Deep Matching Networks and External Knowledge in Information-seeking Conversation Systems
Liu Yang, Minghui Qiu, Chen Qu, J. Guo, Yongfeng Zhang, W. Bruce Croft, Jun Huang, Haiqing Chen
186 / 142 / 0 · 01 May 2018