Question Answering Infused Pre-training of General-Purpose Contextualized Representations

15 June 2021
Robin Jia, M. Lewis, Luke Zettlemoyer

Papers citing "Question Answering Infused Pre-training of General-Purpose Contextualized Representations"

7 / 7 papers shown
Generative Language Models for Paragraph-Level Question Generation
Asahi Ushio, Fernando Alva-Manchego, Jose Camacho-Collados
ELM
08 Oct 2022

QA Is the New KR: Question-Answer Pairs as Knowledge Bases
Wenhu Chen, William W. Cohen, Michiel de Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting
01 Jul 2022

Improving In-Context Few-Shot Learning via Self-Supervised Training
Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva
SSL, AI4MH
03 May 2022

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth
LM&MA, VLM, AI4CE
01 Nov 2021

Domain-matched Pre-training Tasks for Dense Retrieval
Barlas Oğuz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, ..., Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, Yashar Mehdad
RALM
28 Jul 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks
Nandan Thakur, Nils Reimers, Johannes Daxenberger, Iryna Gurevych
16 Oct 2020