
Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model

12 October 2020
Ming Zheng, Dinghan Shen, Yelong Shen, Weizhu Chen, Lin Xiao
Topics: SSL

Papers citing "Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model"

3 of 3 citing papers shown.

Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords
Shahriar Golchin, Mihai Surdeanu, N. Tavabi, A. Kiapour
14 Jul 2023 · 4 citations
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Topics: ELM
20 Apr 2018 · 6,943 citations
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
Topics: ODL
19 Mar 2014 · 736 citations