GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

arXiv:1804.07461, 20 April 2018
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM

Papers citing "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding"

Showing 50 of 4,808 citing papers.
On the Brittleness of CLIP Text Encoders
Allie Tran, Luca Rossetto
06 Nov 2025

A systematic review of relation extraction task since the emergence of Transformers
Celian Ringwald, Fabien Gandon, Catherine Faron, Franck Michel, Hanna Abi Akl
05 Nov 2025

From Insight to Exploit: Leveraging LLM Collaboration for Adaptive Adversarial Text Generation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025
Najrin Sultana, Md Rafi Ur Rashid, Kang Gu, Shagufta Mehnaz
Tags: AAML
05 Nov 2025

ConMeZO: Adaptive Descent-Direction Sampling for Gradient-Free Finetuning of Large Language Models
Lejs Deen Behric, Liang Zhang, Bingcong Li, K. K. Thekumparampil
04 Nov 2025

Why Should the Server Do It All?: A Scalable, Versatile, and Model-Agnostic Framework for Server-Light DNN Inference over Massively Distributed Clients via Training-Free Intermediate Feature Compression
Mingyu Sung, Suhwan Im, Daeho Bang, Il-Min Kim, Sangseok Yun, Jae-Mo Kang
03 Nov 2025

AthenaBench: A Dynamic Benchmark for Evaluating LLMs in Cyber Threat Intelligence
Md Tanvirul Alam, Dipkamal Bhusal, Salman Ahmad, Nidhi Rastogi, Peter Worth
Tags: ELM
03 Nov 2025

Training with Fewer Bits: Unlocking Edge LLMs Training with Stochastic Rounding
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025
Taowen Liu, Marta Andronic, Deniz Gündüz, George A. Constantinides
Tags: MQ
02 Nov 2025
TriCon-Fair: Triplet Contrastive Learning for Mitigating Social Bias in Pre-trained Language Models
Chong Lyu, Lin Li, Shiqing Wu, Jingling Yuan
02 Nov 2025

Transformers as Intrinsic Optimizers: Forward Inference through the Energy Principle
Ruifeng Ren, Sheng Ouyang, Huayi Tang, Yong Liu
02 Nov 2025

With Privacy, Size Matters: On the Importance of Dataset Size in Differentially Private Text Rewriting
Stephen Meisenbacher, Florian Matthes
01 Nov 2025

Exploring and Mitigating Gender Bias in Encoder-Based Transformer Models
Ariyan Hossain, Khondokar Mohammad Ahanaf Hannan, Rakinul Haque, Nowreen Tarannum Rafa, Humayra Musarrat, Shoaib Ahmed Dipu, Farig Yousuf Sadeque
01 Nov 2025

Efficiency vs. Alignment: Investigating Safety and Fairness Risks in Parameter-Efficient Fine-Tuning of LLMs
Mina Taraghi, Y. Pequignot, Amin Nikanjam, Mohamed Amine Merzouk, Foutse Khomh
Tags: ALM
01 Nov 2025

Learning an Efficient Optimizer via Hybrid-Policy Sub-Trajectory Balance
Yunchuan Guan, Yu Liu, Ke Zhou, Hui Li, Sen Jia, ..., Ziyang Wang, X. Zhang, Tao Chen, Jenq-Neng Hwang, Lei Li
01 Nov 2025

Why Federated Optimization Fails to Achieve Perfect Fitting? A Theoretical Perspective on Client-Side Optima
Zhongxiang Lei, Qi Yang, Ping Qiu, Gang Zhang, Yuanchi Ma, Jinyan Liu
Tags: FedML
01 Nov 2025
Exploring Landscapes for Better Minima along Valleys
Tong Zhao, Jiacheng Li, Yuanchang Zhou, Guangming Tan, Weile Jia
31 Oct 2025

Cross-Platform Evaluation of Reasoning Capabilities in Foundation Models
J. Curtò, I. D. Zarzà, Pablo García, Jordi Cabot
Tags: ELM, LRM
30 Oct 2025

Elastic Architecture Search for Efficient Language Models
IEEE International Conference on Multimedia and Expo (ICME), 2025
Shang Wang
Tags: KELM
30 Oct 2025

OmniEduBench: A Comprehensive Chinese Benchmark for Evaluating Large Language Models in Education
Min Zhang, Hao Chen, Wenqi Zhang, Didi Zhu, Xin Lin, Bo Jiang, Aimin Zhou, Fei Wu, Kun Kuang
Tags: ELM
30 Oct 2025

Detecting Anomalies in Machine Learning Infrastructure via Hardware Telemetry
Ziji Chen, Steven W. D. Chien, Peng Qian, Noa Zilberman
29 Oct 2025

AttnCache: Accelerating Self-Attention Inference for LLM Prefill via Attention Cache
IACR Cryptology ePrint Archive (IACR ePrint), 2025
Dinghong Song, Yuan Feng, Y. Wang, S. Chen, Cyril Guyot, F. Blagojevic, Hyeran Jeon, Pengfei Su, Dong Li
29 Oct 2025

Testing Cross-Lingual Text Comprehension In LLMs Using Next Sentence Prediction
Ritesh Sunil Chavan, Jack Mostow
Tags: ELM, LRM
29 Oct 2025

Calibrating and Rotating: A Unified Framework for Weight Conditioning in PEFT
Da Chang, Peng Xue, Yu Li, Yongxiang Liu, P. Xu, Shixun Zhang
28 Oct 2025
ScaLoRA: Optimally Scaled Low-Rank Adaptation for Efficient High-Rank Fine-Tuning
Yilang Zhang, Xiaodong Yang, Y. Cai, G. Giannakis
27 Oct 2025

Beyond Higher Rank: Token-wise Input-Output Projections for Efficient Low-Rank Adaptation
Shiwei Li, Xiandi Luo, Haozhao Wang, Xing Tang, Ziqiang Cui, Dugang Liu, Yuhua Li, Xiuqiang He, Ruixuan Li
27 Oct 2025

SwiftEmbed: Ultra-Fast Text Embeddings via Static Token Lookup for Real-Time Applications
Edouard Lansiaux, Antoine Simonet, Eric Wiel
27 Oct 2025

SALSA: Single-pass Autoregressive LLM Structured Classification
Ruslan Berdichevsky, Shai Nahum-Gefen, Elad Ben Zaken
26 Oct 2025

Memory-based Language Models: An Efficient, Explainable, and Eco-friendly Approach to Large Language Modeling
Antal van den Bosch, Ainhoa Risco Patón, Teun Buijse, Peter Berck, Maarten van Gompel
25 Oct 2025

Typoglycemia under the Hood: Investigating Language Models' Understanding of Scrambled Words
Gianluca Sperduti, Alejandro Moreo
24 Oct 2025

Model Merging with Functional Dual Anchors
Kexuan Shi, Yandong Wen, Weiyang Liu
Tags: MoMe
24 Oct 2025

Efficient semantic uncertainty quantification in language models via diversity-steered sampling
Ji Won Park, K. Cho
24 Oct 2025

α-LoRA: Effective Fine-Tuning via Base Model Rescaling
Aymane El Firdoussi, El Mahdi Chayti, Mohamed El Amine Seddik, Martin Jaggi
24 Oct 2025
Estonian Native Large Language Model Benchmark
Helena Grete Lillepalu, Tanel Alumäe
Tags: ELM
24 Oct 2025

On the Detectability of LLM-Generated Text: What Exactly Is LLM-Generated Text?
Mingmeng Geng, Thierry Poibeau
Tags: DeLMO
23 Oct 2025

Irish-BLiMP: A Linguistic Benchmark for Evaluating Human and Language Model Performance in a Low-Resource Setting
Josh McGiff, Khanh-Tung Tran, William Mulcahy, Dáibhidh Ó Luinín, Jake Dalzell, Róisín Ní Bhroin, Adam Burke, Barry O'Sullivan, Hoang D. Nguyen, Nikola S. Nikolov
23 Oct 2025

CantoNLU: A benchmark for Cantonese natural language understanding
Junghyun Min, York Hay Ng, Sophia Chan, Helena Shunhua Zhao, En-Shiun Annie Lee
Tags: ELM
23 Oct 2025

LM-mixup: Text Data Augmentation via Language Model based Mixup
Zhijie Deng, Zhouan Shen, Ling Li, Yao Zhou, Zhaowei Zhu, Yanji He, Wei Wang, Jiaheng Wei
23 Oct 2025

Dialogue Is Not Enough to Make a Communicative BabyLM (But Neither Is Developmentally Inspired Reinforcement Learning)
Francesca Padovani, Bastian Bunzeck, Manar Ali, Omar Momen, Arianna Bisazza, Hendrik Buschmeier, Sina Zarrieß
Tags: ALM
23 Oct 2025

VeFA: Vector-Based Feature Space Adaptation for Robust Model Fine-Tuning
Peng Wang, Minghao Gu, Qiang Huang
22 Oct 2025

Do Prompts Reshape Representations? An Empirical Study of Prompting Effects on Embeddings
Cesar Gonzalez-Gutierrez, Dirk Hovy
22 Oct 2025
Knowledge Distillation of Uncertainty using Deep Latent Factor Model
Sehyun Park, Jongjin Lee, Yunseop Shin, Ilsang Ohn, Yongdai Kim
Tags: UQ, CV, BDL
22 Oct 2025

Latent Space Factorization in LoRA
Shashi Kumar, Yacouba Kaloga, John Mitros, P. Motlícek, Ina Kodrasi
22 Oct 2025

BLiSS 1.0: Evaluating Bilingual Learner Competence in Second Language Small Language Models
Yuan Gao, Suchir Salhan, Andrew Caines, P. Buttery, Weiwei Sun
Tags: CLL
22 Oct 2025

Tibetan Language and AI: A Comprehensive Survey of Resources, Methods and Challenges
Cheng Huang, Nyima Tashi, Fan Gao, Yutong Liu, J. Li, ..., Guojie Tang, Xiangxiang Wang, Jia Zhang, Tsengdar J. Lee, Yongbin Yu
22 Oct 2025

Restoring Pruned Large Language Models via Lost Component Compensation
Zijian Feng, Hanzhang Zhou, Zixiao Zhu, Tianjiao Li, Jia Jim Deryl Chua, Lee Onn Mak, Gee Wah Ng, Kezhi Mao
22 Oct 2025

Ensembling Pruned Attention Heads For Uncertainty-Aware Efficient Transformers
Firas Gabetni, Giuseppe Curci, Andrea Pilzer, Subhankar Roy, Elisa Ricci, Gianni Franchi
21 Oct 2025

Heterogeneous Adversarial Play in Interactive Environments
Manjie Xu, Xinyi Yang, Jiayu Zhan, Wei Liang, Chi Zhang, Yixin Zhu
21 Oct 2025
Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs
S. Bian, Tao Yu, Shivaram Venkataraman, Youngsuk Park
21 Oct 2025

Topoformer: brain-like topographic organization in Transformer language models through spatial querying and reweighting
Taha Binhuraib, Greta Tuckute, Nicholas M. Blauch
21 Oct 2025

Some Attention is All You Need for Retrieval
Felix Michalak, Steven Abreu
21 Oct 2025

NeuroAda: Activating Each Neuron's Potential for Parameter-Efficient Fine-Tuning
Zhi Zhang, Yixian Shen, Congfeng Cao, Ekaterina Shutova
21 Oct 2025