ResearchTrend.AI
Pre-Training with Whole Word Masking for Chinese BERT

19 June 2019
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang
Papers citing "Pre-Training with Whole Word Masking for Chinese BERT"

4 / 4 papers shown
RCLMuFN: Relational Context Learning and Multiplex Fusion Network for Multimodal Sarcasm Detection
Tongguan Wang, Junkai Li, Guixin Su, Yongcheng Zhang, Dongyu Su, Yuxue Hu, Ying Sha
106 · 2 · 0
17 Dec 2024

QUERT: Continual Pre-training of Language Model for Query Understanding in Travel Domain Search
Jian Xie, Yidan Liang, Jingping Liu, Yanghua Xiao, Baohua Wu, Shenghua Ni
VLM, LRM
27 · 8 · 0
11 Jun 2023

A Mutual Information Maximization Perspective of Language Representation Learning
Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama
SSL
212 · 165 · 0
18 Oct 2019

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
AIMat
716 · 6,743 · 0
26 Sep 2016