ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation

5 July 2021
Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, Haifeng Wang

Papers citing "ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation"

15 / 65 papers shown
Diaformer: Automatic Diagnosis via Symptoms Sequence Generation (20 Dec 2021)
Junying Chen, Dongfang Li, Qingcai Chen, Wenxiu Zhou, Xin Liu
Tags: MedIm

Enhancing Multilingual Language Model with Massive Multilingual Knowledge Triples (22 Nov 2021)
Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq R. Joty, Luo Si
Tags: KELM

On Transferability of Prompt Tuning for Natural Language Processing (12 Nov 2021)
Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, ..., Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou
Tags: AAML, VLM

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey (01 Nov 2021)
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth
Tags: LM&MA, VLM, AI4CE

Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning (10 Oct 2021)
Shaohua Wu, Xudong Zhao, Tong Yu, Rongguo Zhang, C. Shen, ..., Feng Li, Hong Zhu, Jiangang Luo, Liang Xu, Xuanwei Zhang
Tags: ALM

CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation (13 Sep 2021)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Hang Yan, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
Tags: MedIm

MvSR-NAT: Multi-view Subset Regularization for Non-Autoregressive Machine Translation (19 Aug 2021)
Pan Xie, Zexian Li, Xiaohui Hu

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (28 Jul 2021)
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
Tags: VLM, SyDa

ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders (04 May 2021)
Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee

Zero-Shot Text-to-Image Generation (24 Feb 2021)
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
Tags: VLM

ERNIE-Doc: A Retrospective Long-Document Modeling Transformer (31 Dec 2020)
Siyu Ding, Junyuan Shang, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge (12 Sep 2020)
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Claire Cardie

Scaling Laws for Neural Language Models (23 Jan 2020)
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism (17 Sep 2019)
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
Tags: MoE

Knowledge Enhanced Contextual Word Representations (09 Sep 2019)
Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith