ResearchTrend.AI
ERNIE: Enhanced Representation through Knowledge Integration (arXiv:1904.09223)

19 April 2019
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua-Hong Wu

Papers citing "ERNIE: Enhanced Representation through Knowledge Integration"

50 / 359 papers shown
LERT: A Linguistically-motivated Pre-trained Language Model
Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu
10 Nov 2022
CLOP: Video-and-Language Pre-Training with Knowledge Regularizations
Guohao Li, Hu Yang, Feng He, Zhifan Feng, Yajuan Lyu, Hua-Hong Wu, Haifeng Wang
VLM
07 Nov 2022
Tri-Attention: Explicit Context-Aware Attention Mechanism for Natural Language Processing
Rui Yu, Yifeng Li, Wenpeng Lu, LongBing Cao
05 Nov 2022
Evaluation of Automated Speech Recognition Systems for Conversational Speech: A Linguistic Perspective
H. Pasandi, Haniyeh B. Pasandi
05 Nov 2022
VarMAE: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding
Dou Hu, Xiaolong Hou, Xiyang Du, Mengyuan Zhou, Lian-Xin Jiang, Yang Mo, Xiaofeng Shi
01 Nov 2022
WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model for Financial Domain
Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, S. Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang
AIFin
31 Oct 2022
Exploiting prompt learning with pre-trained language models for Alzheimer's Disease detection
Yi Wang, Jiajun Deng, Tianzi Wang, Bo Zheng, Shoukang Hu, Xunying Liu, Helen M. Meng
29 Oct 2022
Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling
Peijie Jiang, Dingkun Long, Yanzhao Zhang, Pengjun Xie, Meishan Zhang, M. Zhang
SSL
27 Oct 2022
EntityCS: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching
Chenxi Whitehouse, Fenia Christopoulou, Ignacio Iacobacci
22 Oct 2022
Spectrum-BERT: Pre-training of Deep Bidirectional Transformers for Spectral Classification of Chinese Liquors
Yansong Wang, Yundong Sun, Yan-Jiao Fu, Dongjie Zhu, Zhaoshuo Tian
22 Oct 2022
Enhancing Tabular Reasoning with Pattern Exploiting Training
Abhilash Shankarampeta, Vivek Gupta, Shuo Zhang
LMTD, RALM, ReLM
21 Oct 2022
InforMask: Unsupervised Informative Masking for Language Model Pretraining
Nafis Sadeq, Canwen Xu, Julian McAuley
21 Oct 2022
SLING: Sino Linguistic Evaluation of Large Language Models
Yixiao Song, Kalpesh Krishna, R. Bhatt, Mohit Iyyer
21 Oct 2022
Tele-Knowledge Pre-training for Fault Analysis
Zhuo Chen, Wen Zhang, Yufen Huang, Mingyang Chen, Yuxia Geng, ..., Song Jiang, Zhaoyang Lian, Y. Li, Lei Cheng, Hua-zeng Chen
20 Oct 2022
Pre-training Language Models with Deterministic Factual Knowledge
Shaobo Li, Xiaoguang Li, Lifeng Shang, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu
KELM
20 Oct 2022
Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning
Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei
18 Oct 2022
Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding
J. Wang, Wenkang Huang, Qiuhui Shi, Hongbin Wang, Minghui Qiu, Xiang Li, Ming Gao
KELM, VLM
16 Oct 2022
CDConv: A Benchmark for Contradiction Detection in Chinese Conversations
Chujie Zheng, Jinfeng Zhou, Yinhe Zheng, Libiao Peng, Zhen Guo, Wenquan Wu, Zhengyu Niu, Hua-Hong Wu, Minlie Huang
16 Oct 2022
ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding
Qiming Peng, Yinxu Pan, Wenjin Wang, Bin Luo, Zhenyu Zhang, ..., Shi Feng, Yu Sun, Hao Tian, Hua-Hong Wu, Haifeng Wang
12 Oct 2022
ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training
Bin Shan, Weichong Yin, Yu Sun, Hao Tian, Hua-Hong Wu, Haifeng Wang
VLM
30 Sep 2022
WeLM: A Well-Read Pre-trained Language Model for Chinese
Hui Su, Xiao Zhou, Houjin Yu, Xiaoyu Shen, Yuwen Chen, Zilin Zhu, Yang Yu, Jie Zhou
21 Sep 2022
A Comprehensive Survey on Trustworthy Recommender Systems
Wenqi Fan, Xiangyu Zhao, Xiao Chen, Jingran Su, Jingtong Gao, ..., Qidong Liu, Yiqi Wang, Hanfeng Xu, Lei Chen, Qing Li
FaML
21 Sep 2022
Physical Logic Enhanced Network for Small-Sample Bi-Layer Metallic Tubes Bending Springback Prediction
Chang-Hai Sun, Zili Wang, Shuyou Zhang, Le Wang, Jianrong Tan
AI4CE
20 Sep 2022
Align, Reason and Learn: Enhancing Medical Vision-and-Language Pre-training with Knowledge
Zhihong Chen, Guanbin Li, Xiang Wan
15 Sep 2022
SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding
Wanwei He, Yinpei Dai, Binyuan Hui, Min Yang, Zhen Cao, Jianbo Dong, Fei Huang, Luo Si, Yongbin Li
VLM
14 Sep 2022
PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization
S. Fadnavis, Amit Dhurandhar, R. Norel, Jenna M. Reinen, C. Agurto, E. Secchettin, V. Schweiger, Giovanni Perini, Guillermo Cecchi
14 Sep 2022
CrossDial: An Entertaining Dialogue Dataset of Chinese Crosstalk
Baizhou Huang, Shikang Du, Xiao-Yi Wan
03 Sep 2022
DPTDR: Deep Prompt Tuning for Dense Passage Retrieval
Zhen-Quan Tang, Benyou Wang, Ting Yao
VLM
24 Aug 2022
CLOWER: A Pre-trained Language Model with Contrastive Learning over Word and Character Representations
Borun Chen, Hongyin Tang, Jiahao Bu, Kai Zhang, Jingang Wang, Qifan Wang, Haitao Zheng, Wei Yu Wu, Liqian Yu
VLM
23 Aug 2022
Learning Better Masking for Better Language Model Pre-training
Dongjie Yang, Zhuosheng Zhang, Hai Zhao
23 Aug 2022
Brand Celebrity Matching Model Based on Natural Language Processing
Han Yang, Kejian Yang, Erhan Zhang
18 Aug 2022
An Interpretability Evaluation Benchmark for Pre-trained Language Models
Ya-Ming Shen, Lijie Wang, Ying Chen, Xinyan Xiao, Jing Liu, Hua-Hong Wu
28 Jul 2022
MLRIP: Pre-training a military language representation model with informative factual knowledge and professional knowledge base
Hui Li, Xu Yang, Xin Zhao, Lin Yu, Jiping Zheng, Wei Sun
KELM
28 Jul 2022
Masked Spatial-Spectral Autoencoders Are Excellent Hyperspectral Defenders
Jiahao Qi, Z. Gong, Xingyue Liu, Kangcheng Bin, Chen Chen, Yongqiang Li, Wei Xue, Yu Zhang, P. Zhong
AAML
16 Jul 2022
Exploiting Word Semantics to Enrich Character Representations of Chinese Pre-trained Models
Wenbiao Li, Rui Sun, Yunfang Wu
13 Jul 2022
Understanding Performance of Long-Document Ranking Models through Comprehensive Evaluation and Leaderboarding
Leonid Boytsov, David Akinpelu, Tianyi Lin, Fangwei Gao, Yutian Zhao, Jeffrey Huang, Nipun Katyal, Eric Nyberg
04 Jul 2022
An Understanding-Oriented Robust Machine Reading Comprehension Model
Feiliang Ren, Yongkang Liu, Bochao Li, Shilei Liu, Bingchao Wang, Jiaqi Wang, Chunchao Liu, Qi Ma
01 Jul 2022
Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval
Xiao Dong, Xunlin Zhan, Yunchao Wei, Xiaoyong Wei, Yaowei Wang, Minlong Lu, Xiaochun Cao, Xiaodan Liang
17 Jun 2022
Towards Robust Ranker for Text Retrieval
Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Guodong Long, Binxing Jiao, Daxin Jiang
OOD
16 Jun 2022
KE-QI: A Knowledge Enhanced Article Quality Identification Dataset
Chunhui Ai, Derui Wang, Xuemi Yan, Yang Xu, Wenrui Xie, Ziqiang Cao
15 Jun 2022
A Unified Continuous Learning Framework for Multi-modal Knowledge Discovery and Pre-training
Zhihao Fan, Zhongyu Wei, Jingjing Chen, Siyuan Wang, Zejun Li, Jiarong Xu, Xuanjing Huang
CLL
11 Jun 2022
1Cademy at Semeval-2022 Task 1: Investigating the Effectiveness of Multilingual, Multitask, and Language-Agnostic Tricks for the Reverse Dictionary Task
Zhiyong Wang, Ge Zhang, Nineli Lashkarashvili
08 Jun 2022
Less Learn Shortcut: Analyzing and Mitigating Learning of Spurious Feature-Label Correlation
Yanrui Du, Jing Yang, Yan Chen, Jing Liu, Sendong Zhao, Qiaoqiao She, Huaqin Wu, Haifeng Wang, Bing Qin
25 May 2022
RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
Shitao Xiao, Zheng Liu, Yingxia Shao, Zhao Cao
RALM
24 May 2022
Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding
Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, ..., Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais
21 May 2022
PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit
Hui Zhang, Tian Yuan, Junkun Chen, Xintong Li, Renjie Zheng, ..., Zeyu Chen, Xiaoguang Hu, Dianhai Yu, Yanjun Ma, Liang Huang
AuLLM
20 May 2022
ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval
Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, ..., Hao Tian, Hua-Hong Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang
18 May 2022
POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection
Yujian Liu, Xinliang Frederick Zhang, David Wegsman, Nick Beauchamp, Lu Wang
02 May 2022
Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking
Qian Dong, Yiding Liu, Suqi Cheng, Shuaiqiang Wang, Zhicong Cheng, Shuzi Niu, Dawei Yin
25 Apr 2022
Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention
Tong Yu, Ruslan Khalitov, Lei Cheng, Zhirong Yang
MoE
22 Apr 2022
Page 1 of 8