GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

20 April 2018
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
    ELM

Papers citing "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding"

50 / 4,447 papers shown
Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model
Wenhan Xiong
Jingfei Du
William Yang Wang
Veselin Stoyanov
SSL, KELM
105
201
0
20 Dec 2019
PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
Jingqing Zhang
Yao-Min Zhao
Mohammad Saleh
Peter J. Liu
RALM, 3DGS
307
2,057
0
18 Dec 2019
Multilingual is not enough: BERT for Finnish
Antti Virtanen
Jenna Kanerva
Rami Ilo
Jouni Luoma
Juhani Luotolahti
T. Salakoski
Filip Ginter
S. Pyysalo
88
281
0
15 Dec 2019
WaLDORf: Wasteless Language-model Distillation On Reading-comprehension
J. Tian
A. Kreuzer
Pai-Hung Chen
Hans-Martin Will
VLM
60
3
0
13 Dec 2019
Extending Machine Language Models toward Human-Level Language Understanding
James L. McClelland
Felix Hill
Maja R. Rudolph
Jason Baldridge
Hinrich Schütze
LRM
78
35
0
12 Dec 2019
FlauBERT: Unsupervised Language Model Pre-training for French
Hang Le
Loïc Vial
Jibril Frej
Vincent Segonne
Maximin Coavoux
Benjamin Lecouteux
A. Allauzen
Benoît Crabbé
Laurent Besacier
D. Schwab
AI4CE
111
401
0
11 Dec 2019
Unsupervised Transfer Learning via BERT Neuron Selection
M. Valipour
E. Lee
Jaime R. Jamacaro
C. Bessega
58
5
0
10 Dec 2019
Adversarial Analysis of Natural Language Inference Systems
Tiffany Chien
Jugal Kalita
AAML
56
12
0
07 Dec 2019
Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline
Vishvak Murahari
Dhruv Batra
Devi Parikh
Abhishek Das
VLM
109
117
0
05 Dec 2019
An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering
Shayne Longpre
Yi Lu
Zhucheng Tu
Christopher DuBois
66
70
0
04 Dec 2019
AMUSED: A Multi-Stream Vector Representation Method for Use in Natural Dialogue
Gaurav Kumar
Rishabh Joshi
Jaspreet Singh
Promod Yenigalla
68
7
0
04 Dec 2019
TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP
Nils Rethmeier
V. Saxena
Isabelle Augenstein
FAtt
76
2
0
02 Dec 2019
EDA: Enriching Emotional Dialogue Acts using an Ensemble of Neural Annotators
Chandrakant Bothe
C. Weber
S. Magg
S. Wermter
57
10
0
02 Dec 2019
EduBERT: Pretrained Deep Language Models for Learning Analytics
Benjamin Clavié
K. Gal
21
16
0
02 Dec 2019
BLiMP: The Benchmark of Linguistic Minimal Pairs for English
Alex Warstadt
Alicia Parrish
Haokun Liu
Anhad Mohananey
Wei Peng
Sheng-Fu Wang
Samuel R. Bowman
137
496
0
02 Dec 2019
Bimodal Speech Emotion Recognition Using Pre-Trained Language Models
Verena Heusser
Niklas Freymuth
Stefan Constantin
A. Waibel
92
26
0
29 Nov 2019
Multimodal Machine Translation through Visuals and Speech
U. Sulubacak
Ozan Caglayan
Stig-Arne Gronroos
Aku Rouhe
Desmond Elliott
Lucia Specia
Jörg Tiedemann
99
77
0
28 Nov 2019
Do Attention Heads in BERT Track Syntactic Dependencies?
Phu Mon Htut
Jason Phang
Shikha Bordia
Samuel R. Bowman
83
137
0
27 Nov 2019
Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection
Chris Dulhanty
Jason L. Deglint
Ibrahim Ben Daya
A. Wong
49
22
0
27 Nov 2019
Evaluating Commonsense in Pre-trained Language Models
Xuhui Zhou
Yue Zhang
Leyang Cui
Dandan Huang
AI4MH, LRM
88
185
0
27 Nov 2019
Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation
Junliang Guo
Xu Tan
Linli Xu
Tao Qin
Enhong Chen
Tie-Yan Liu
98
85
0
20 Nov 2019
Ladder Loss for Coherent Visual-Semantic Embedding
Mo Zhou
Zhenxing Niu
Le Wang
Zhanning Gao
Qilin Zhang
G. Hua
86
40
0
18 Nov 2019
Classification as Decoder: Trading Flexibility for Control in Medical Dialogue
Sam Shleifer
Manish Chablani
A. Kannan
Namit Katariya
X. Amatriain
BDL, MedIm
30
0
0
16 Nov 2019
What do you mean, BERT? Assessing BERT as a Distributional Semantics Model
Timothee Mickus
Denis Paperno
Mathieu Constant
Kees van Deemter
77
46
0
13 Nov 2019
Mark my Word: A Sequence-to-Sequence Approach to Definition Modeling
Timothee Mickus
Denis Paperno
Mathieu Constant
46
30
0
13 Nov 2019
Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text
Ian Porada
Kaheer Suleman
Jackie C.K. Cheung
79
13
0
13 Nov 2019
KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation
Xiaozhi Wang
Tianyu Gao
Zhaocheng Zhu
Zhengyan Zhang
Zhiyuan Liu
Juan-Zi Li
Jian Tang
170
675
0
13 Nov 2019
TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection
Siddhant Garg
Thuy Vu
Alessandro Moschitti
103
216
0
11 Nov 2019
Improving BERT Fine-tuning with Embedding Normalization
Wenxuan Zhou
Junyi Du
Xiang Ren
36
6
0
10 Nov 2019
Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks
Trapit Bansal
Rishikesh Jha
Andrew McCallum
SSL
94
121
0
10 Nov 2019
Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding
Dhanasekar Sundararaman
Vivek Subramanian
Guoyin Wang
Shijing Si
Dinghan Shen
Dong Wang
Lawrence Carin
64
41
0
10 Nov 2019
Distilling Knowledge Learned in BERT for Text Generation
Yen-Chun Chen
Zhe Gan
Yu Cheng
Jingzhou Liu
Jingjing Liu
73
28
0
10 Nov 2019
Generalizing Natural Language Analysis through Span-relation Representations
Zhengbao Jiang
Wenyuan Xu
Jun Araki
Graham Neubig
81
60
0
10 Nov 2019
MKD: a Multi-Task Knowledge Distillation Approach for Pretrained Language Models
Linqing Liu
Haiquan Wang
Jimmy J. Lin
R. Socher
Caiming Xiong
65
21
0
09 Nov 2019
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
Haoming Jiang
Pengcheng He
Weizhu Chen
Xiaodong Liu
Jianfeng Gao
T. Zhao
135
563
0
08 Nov 2019
ERASER: A Benchmark to Evaluate Rationalized NLP Models
Jay DeYoung
Sarthak Jain
Nazneen Rajani
Eric P. Lehman
Caiming Xiong
R. Socher
Byron C. Wallace
160
640
0
08 Nov 2019
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning
Jaejun Lee
Raphael Tang
Jimmy J. Lin
69
127
0
08 Nov 2019
Certified Data Removal from Machine Learning Models
Chuan Guo
Tom Goldstein
Awni Y. Hannun
Laurens van der Maaten
MU
145
452
0
08 Nov 2019
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance
R. Thomas McCoy
Junghyun Min
Tal Linzen
137
151
0
07 Nov 2019
SentiLARE: Sentiment-Aware Language Representation Learning with Linguistic Knowledge
Pei Ke
Haozhe Ji
Siyang Liu
Xiaoyan Zhu
Minlie Huang
64
7
0
06 Nov 2019
Unsupervised Cross-lingual Representation Learning at Scale
Alexis Conneau
Kartikay Khandelwal
Naman Goyal
Vishrav Chaudhary
Guillaume Wenzek
Francisco Guzmán
Edouard Grave
Myle Ott
Luke Zettlemoyer
Veselin Stoyanov
230
6,616
0
05 Nov 2019
MML: Maximal Multiverse Learning for Robust Fine-Tuning of Language Models
Itzik Malkiel
Lior Wolf
29
2
0
05 Nov 2019
Deepening Hidden Representations from Pre-trained Language Models
Junjie Yang
Hai Zhao
24
10
0
05 Nov 2019
Human-centric Metric for Accelerating Pathology Reports Annotation
Ruibin Ma
Po-Hsuan Cameron Chen
Gang Li
W. Weng
Angela Lin
Krishna Gadepalli
Yuannan Cai
20
4
0
31 Oct 2019
Adversarial NLI: A New Benchmark for Natural Language Understanding
Yixin Nie
Adina Williams
Emily Dinan
Joey Tianyi Zhou
Jason Weston
Douwe Kiela
183
1,013
0
31 Oct 2019
Transfer Learning from Transformers to Fake News Challenge Stance Detection (FNC-1) Task
Valeriya Slovikovskaya
60
42
0
31 Oct 2019
Lsh-sampling Breaks the Computation Chicken-and-egg Loop in Adaptive Stochastic Gradient Estimation
Beidi Chen
Yingchen Xu
Anshumali Shrivastava
62
16
0
30 Oct 2019
Ensembling Strategies for Answering Natural Questions
Anthony Ferritto
Lin Pan
Rishav Chakravarti
Salim Roukos
Radu Florian
J. William Murdock
Avirup Sil
ELM
42
0
0
30 Oct 2019
Inducing brain-relevant bias in natural language processing models
Dan Schwartz
Mariya Toneva
Leila Wehbe
80
83
0
29 Oct 2019
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
M. Lewis
Yinhan Liu
Naman Goyal
Marjan Ghazvininejad
Abdel-rahman Mohamed
Omer Levy
Veselin Stoyanov
Luke Zettlemoyer
AIMat, VLM
268
10,907
0
29 Oct 2019