GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (arXiv:1804.07461)

20 April 2018
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman

Papers citing "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding"

Showing 50 of 4,808 citing papers.
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. Neural Information Processing Systems (NeurIPS), 2019. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 02 May 2019.
HELP: A Dataset for Identifying Shortcomings of Neural Models in Monotonicity Reasoning. International Workshop on Semantic Evaluation (SemEval), 2019. Hitomi Yanaka, K. Mineshima, D. Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos. 27 Apr 2019.
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension. Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, ..., Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick. 25 Apr 2019.
Semantic Drift in Multilingual Representations. Lisa Beinborn, Rochelle Choenni. 24 Apr 2019.
Exploring Unsupervised Pretraining and Sentence Structure Modelling for Winograd Schema Challenge. Yu-Ping Ruan, Xiao-Dan Zhu, Zhenhua Ling, Zhan Shi, Quan Liu, Si Wei. 22 Apr 2019.
NeuronBlocks: Building Your NLP DNN Models Like Playing Lego. Ming Gong, Linjun Shou, Wutao Lin, Zhijie Sang, Quanjia Yan, Ze Yang, Feixiang Cheng, Daxin Jiang. 21 Apr 2019.
Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding. Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao. 20 Apr 2019.
Language Models with Transformers. Chenguang Wang, Mu Li, Alex Smola. 20 Apr 2019.
Unifying Question Answering, Text Classification, and Regression via Span Extraction. N. Keskar, Bryan McCann, Caiming Xiong, R. Socher. 19 Apr 2019.
ERNIE: Enhanced Representation through Knowledge Integration. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 19 Apr 2019.
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT. Shijie Wu, Mark Dredze. 19 Apr 2019.
Continual Learning for Sentence Representations Using Conceptors. Tianlin Liu, Lyle Ungar, João Sedoc. 18 Apr 2019.
Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study. Jorge A. Balazs, Y. Matsuo. 11 Apr 2019.
Deep Neural Networks Ensemble for Detecting Medication Mentions in Tweets. D. Weissenbacher, A. Sarker, A. Klein, K. O'Connor, Arjun Magge Ranganatha, G. Gonzalez-Hernandez. 10 Apr 2019.
AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning. Han Guo, Ramakanth Pasunuru, Mohit Bansal. 08 Apr 2019.
Evaluating Coherence in Dialogue Systems using Entailment. Nouha Dziri, Ehsan Kamalloo, K. Mathewson, Osmar Zaiane. 06 Apr 2019.
Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop. Afra Alishahi, Grzegorz Chrupała, Tal Linzen. 05 Apr 2019.
Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches. Shane Storks, Qiaozi Gao, J. Chai. 02 Apr 2019.
Interpreting Black Box Models via Hypothesis Testing. Collin Burns, Jesse Thomason, Wesley Tansey. 29 Mar 2019.
Distilling Task-Specific Knowledge from BERT into Simple Neural Networks. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy J. Lin. 28 Mar 2019.
Cloze-driven Pretraining of Self-attention Networks. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019. Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli. 19 Mar 2019.
To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks. Matthew E. Peters, Sebastian Ruder, Noah A. Smith. 14 Mar 2019.
Evidence Sentence Extraction for Machine Reading Comprehension. Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, David A. McAllester, Dan Roth. 23 Feb 2019.
Parameter-Efficient Transfer Learning for NLP. International Conference on Machine Learning (ICML), 2019. N. Houlsby, A. Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly. 02 Feb 2019.
Multi-Task Deep Neural Networks for Natural Language Understanding. Annual Meeting of the Association for Computational Linguistics (ACL), 2019. Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao. 31 Jan 2019.
Learning and Evaluating General Linguistic Intelligence. Dani Yogatama, Cyprien de Masson d'Autume, Jerome T. Connor, Tomás Kociský, Mike Chrzanowski, ..., Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, Phil Blunsom. 31 Jan 2019.
No Training Required: Exploring Random Encoders for Sentence Classification. International Conference on Learning Representations (ICLR), 2019. John Wieting, Douwe Kiela. 29 Jan 2019.
Cross-lingual Language Model Pretraining. Guillaume Lample, Alexis Conneau. 22 Jan 2019.
Sentence transition matrix: An efficient approach that preserves sentence semantics. Myeongjun Jang, Pilsung Kang. 16 Jan 2019.
Linguistic Analysis of Pretrained Sentence Encoders with Acceptability Judgments. Alex Warstadt, Samuel R. Bowman. 11 Jan 2019.
Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling. Alex Wang, Jan Hula, Patrick Xia, R. Pappagari, R. Thomas McCoy, ..., Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman. 28 Dec 2018.
Adversarial Attack and Defense on Graph Data: A Survey. Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Yixin Liu, Philip S. Yu, Lifang He, Yangqiu Song. 26 Dec 2018.
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. Mikel Artetxe, Holger Schwenk. 26 Dec 2018.
Analysis Methods in Neural Language Processing: A Survey. Yonatan Belinkov, James R. Glass. 21 Dec 2018.
nocaps: novel object captioning at scale. Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, Peter Anderson. 20 Dec 2018.
Conditional BERT Contextual Augmentation. Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, Songlin Hu. 17 Dec 2018.
Practical Text Classification With Large Pre-Trained Language Models. Neel Kant, Raul Puri, Nikolai Yakovenko, Bryan Catanzaro. 04 Dec 2018.
Non-entailed subsequences as a challenge for natural language inference. R. Thomas McCoy, Tal Linzen. 29 Nov 2018.
Analyzing Compositionality-Sensitivity of NLI Models. AAAI Conference on Artificial Intelligence (AAAI), 2018. Yixin Nie, Yicheng Wang, Mohit Bansal. 16 Nov 2018.
Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference. Masashi Yoshikawa, K. Mineshima, Hiroshi Noji, D. Bekki. 15 Nov 2018.
How Reasonable are Common-Sense Reasoning Tasks: A Case-Study on the Winograd Schema Challenge and SWAG. P. Trichelair, Ali Emami, Adam Trischler, Kaheer Suleman, Jackie C.K. Cheung. 05 Nov 2018.
Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks. Jason Phang, Thibault Févry, Samuel R. Bowman. 02 Nov 2018.
CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant. 02 Nov 2018.
Dialogue Natural Language Inference. Sean Welleck, Jason Weston, Arthur Szlam, Kyunghyun Cho. 01 Nov 2018.
A Simple Recurrent Unit with Reduced Tensor Product Representations. Shuai Tang, P. Smolensky, V. D. Sa. 29 Oct 2018.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 11 Oct 2018.
Language Modeling Teaches You More Syntax than Translation Does: Lessons Learned Through Auxiliary Task Analysis. Kelly W. Zhang, Samuel R. Bowman. 26 Sep 2018.
Meta-Embedding as Auxiliary Task Regularization. J. Ó. Neill, Danushka Bollegala. 16 Sep 2018.
XNLI: Evaluating Cross-lingual Sentence Representations. Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, Veselin Stoyanov. 13 Sep 2018.
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Examples for Question Answering. Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, Jordan L. Boyd-Graber. 07 Sep 2018.