REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training

10 May 2021 · arXiv:2105.04201
Fangkai Jiao, Yangyang Guo, Yilin Niu, Feng Ji, Feng-Lin Li, Liqiang Nie
LRM

Papers citing "REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training"

8 / 8 papers shown

Machine Reading Comprehension using Case-based Reasoning
Dung Ngoc Thai, Dhruv Agarwal, Mudit Chaudhary, Wenlong Zhao, Rajarshi Das, Manzil Zaheer, J. Lee, Hannaneh Hajishirzi, Andrew McCallum
15 · 1 · 0 · 24 May 2023

Exploring Self-supervised Logic-enhanced Training for Large Language Models
Fangkai Jiao, Zhiyang Teng, Bosheng Ding, Zhengyuan Liu, Nancy F. Chen, Shafiq R. Joty
ReLM, LRM
19 · 4 · 0 · 23 May 2023

SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences
Peng Qi, Guangtao Wang, Jing Huang
16 · 0 · 0 · 03 Aug 2022

A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA
Yangyang Guo, Liqiang Nie, Yongkang Wong, Y. Liu, Zhiyong Cheng, Mohan S. Kankanhalli
69 · 39 · 0 · 30 Jun 2022

Bridging the Gap between Language Models and Cross-Lingual Sequence Labeling
Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang
19 · 10 · 0 · 11 Apr 2022

MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning
Fangkai Jiao, Yangyang Guo, Xuemeng Song, Liqiang Nie
LRM
31 · 35 · 0 · 01 Mar 2022

Rethinking embedding coupling in pre-trained language models
Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder
93 · 142 · 0 · 24 Oct 2020

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
LM&MA, VLM
241 · 1,450 · 0 · 18 Mar 2020