ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Train No Evil: Selective Masking for Task-Guided Pre-Training

21 April 2020 · Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun
Papers citing "Train No Evil: Selective Masking for Task-Guided Pre-Training"

11 papers shown
CPRM: A LLM-based Continual Pre-training Framework for Relevance Modeling in Commercial Search
Kaixin Wu, Yixin Ji, Z. Chen, Qiang Wang, Cunxiang Wang, ..., Jia Xu, Zhongyi Liu, Jinjie Gu, Yuan Zhou, Linjian Mo
KELM, CLL · 02 Dec 2024
How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation?
Rheeya Uppaal, Yixuan Li, Junjie Hu
31 Jan 2024
An Anchor Learning Approach for Citation Field Learning
Zilin Yuan, Borun Chen, Yimeng Dai, Yinghui Li, Hai-Tao Zheng, Rui Zhang
07 Sep 2023
Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords
Shahriar Golchin, Mihai Surdeanu, N. Tavabi, A. Kiapour
14 Jul 2023
Difference-Masking: Choosing What to Mask in Continued Pretraining
Alex Wilf, Syeda Nahida Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency
CLL, SSL · 23 May 2023
Teaching the Pre-trained Model to Generate Simple Texts for Text Simplification
Renliang Sun, Wei-ping Xu, Xiaojun Wan
CLL · 21 May 2023
Using Selective Masking as a Bridge between Pre-training and Fine-tuning
Tanish Lad, Himanshu Maheshwari, Shreyas Kottukkal, R. Mamidi
24 Nov 2022
Knowledgeable Salient Span Mask for Enhancing Language Models as Knowledge Base
Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang
KELM · 17 Apr 2022
MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering
Fangzhi Xu, Qika Lin, J. Liu, Lingling Zhang, Tianzhe Zhao, Qianyi Chai, Yudai Pan
06 Dec 2021
Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
AIFin, MQ, AI4MH · 14 Jun 2021
Studying Strategically: Learning to Mask for Closed-book QA
Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen-tau Yih, Xiang Ren, Madian Khabsa
OffRL · 31 Dec 2020