ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
arXiv: 1804.07461 (v3, latest) · 20 April 2018
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM

Papers citing "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding"

Showing 50 of 4,447 citing papers.
• MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation
  Shen Yuan, Yin Zheng, Taifeng Wang, Binbin Liu, Hongteng Xu
  01 Jul 2025 · MoMe
• Efficient and Privacy-Preserving Soft Prompt Transfer for LLMs
  Xun Wang, Jing Xu, Franziska Boenisch, Michael Backes, Christopher A. Choquette-Choo, Adam Dziedzic
  19 Jun 2025 · AAML
• SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity
  Samir Khaki, Xiuyu Li, Junxian Guo, Ligeng Zhu, Chenfeng Xu, Konstantinos N. Plataniotis, Amir Yazdanbakhsh, Kurt Keutzer, Song Han, Zhijian Liu
  19 Jun 2025
• Bayesian Optimization over Bounded Domains with the Beta Product Kernel
  Huy Hoang Nguyen, Han Zhou, Matthew B. Blaschko, A. Tiulpin
  19 Jun 2025
• Finance Language Model Evaluation (FLaME)
  Glenn Matlin, Mika Okamoto, Huzaifa Pardawala, Yang Yang, Sudheer Chava
  18 Jun 2025 · AIFin, LRM
• Sequential Policy Gradient for Adaptive Hyperparameter Optimization
  Zheng Li, Jerry Q. Cheng, Huanying Gu
  18 Jun 2025 · OffRL
• GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with Guided Selection Vectors
  Hengyuan Zhang, Xinrong Chen, Yingmin Qiu, Xiao Liang, Ziyue Li, ..., Weiping Li, Tong Mo, Wenyue Li, Hayden Kwok-Hay So, Ngai Wong
  17 Jun 2025 · MoE, ALM
• Improving LoRA with Variational Learning
  Bai Cong, Nico Daheim, Yuesong Shen, Rio Yokota, Mohammad Emtiyaz Khan, Thomas Möllenhoff
  17 Jun 2025
• FedOne: Query-Efficient Federated Learning for Black-box Discrete Prompt Learning
  Ganyu Wang, Jinjie Fang, Maxwell J. Ying, Bin Gu, Xi Chen, Boyu Wang, Charles Ling
  17 Jun 2025 · FedML
• When Does Meaning Backfire? Investigating the Role of AMRs in NLI
  Junghyun Min, Xiulin Yang, Shira Wein
  17 Jun 2025 · LLMSV
• The Butterfly Effect: Neural Network Training Trajectories Are Highly Sensitive to Initial Conditions
  Devin Kwok, Gül Sena Altıntaş, Colin Raffel, David Rolnick
  16 Jun 2025
• Dynamic Context-oriented Decomposition for Task-aware Low-rank Adaptation with Less Forgetting and Faster Convergence
  Yibo Yang, Sihao Liu, Chuan Rao, Bang An, Tiancheng Shen, Philip Torr, Ming-Hsuan Yang, Bernard Ghanem
  16 Jun 2025
• Mitigating Safety Fallback in Editing-based Backdoor Injection on LLMs
  Houcheng Jiang, Zetong Zhao, Junfeng Fang, Haokai Ma, Ruipeng Wang, Yang Deng, Xiang Wang, Xiangnan He
  16 Jun 2025 · KELM, AAML
• Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation
  Zikai Zhang, Ping Liu, Jiahao Xu, Rui Hu
  13 Jun 2025
• OPT-BENCH: Evaluating LLM Agent on Large-Scale Search Spaces Optimization Problems
  Xiaozhe Li, Jixuan Chen, Xinyu Fang, Shengyuan Ding, Haodong Duan, Qingwen Liu, Kai-xiang Chen
  12 Jun 2025 · LLMAG, LRM
• Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters
  Tatsuya Hiraoka, Kentaro Inui
  12 Jun 2025
• Efficiency Robustness of Dynamic Deep Learning Systems
  Ravishka Rathnasuriya, Tingxi Li, Zexin Xu, Zihe Song, Mirazul Haque, Simin Chen, Wei Yang
  12 Jun 2025 · AAML, SILM
• Auto-Compressing Networks
  Vaggelis Dorovatas, Georgios Paraskevopoulos, Alexandros Potamianos
  11 Jun 2025
• On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention
  Yeonju Ro, Zhenyu Zhang, Souvik Kundu, Zhangyang Wang, Aditya Akella
  11 Jun 2025
• DIVE into MoE: Diversity-Enhanced Reconstruction of Large Language Models from Dense into Mixture-of-Experts
  Yuchen Feng, Bowen Shen, Naibin Gu, Jiaxuan Zhao, Peng Fu, Zheng Lin, Weiping Wang
  11 Jun 2025 · MoMe, MoE
• Beyond Benchmarks: A Novel Framework for Domain-Specific LLM Evaluation and Knowledge Mapping
  Nitin Sharma, Thomas Wolfers, Çağatay Yıldız
  09 Jun 2025 · ALM
• LoRMA: Low-Rank Multiplicative Adaptation for LLMs
  Harsh Bihany, Shubham Patel, Ashutosh Modi
  09 Jun 2025
• PrunePEFT: Iterative Hybrid Pruning for Parameter-Efficient Fine-tuning of LLMs
  Tongzhou Yu, Zhuhao Zhang, Guanghui Zhu, Shen Jiang, Meikang Qiu, Yihua Huang
  09 Jun 2025
• ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous Driving
  Yongkang Li, Kaixin Xiong, Xiangyu Guo, Fang Li, Sixu Yan, ..., Bing Wang, Guang Chen, Hangjun Ye, Wenyu Liu, Xinggang Wang
  09 Jun 2025 · VLM
• They want to pretend not to understand: The Limits of Current LLMs in Interpreting Implicit Content of Political Discourse
  Walter Paci, Alessandro Panunzi, Sandro Pezzelle
  07 Jun 2025
• What Makes a Good Natural Language Prompt?
  Do Xuan Long, Duy Dinh, Ngoc-Hai Nguyen, Kenji Kawaguchi, Nancy F. Chen, Shafiq Joty, Min-Yen Kan
  07 Jun 2025
• Eigenspectrum Analysis of Neural Networks without Aspect Ratio Bias
  Yuanzhe Hu, Kinshuk Goel, Vlad Killiakov, Yaoqing Yang
  06 Jun 2025
• Come Together, But Not Right Now: A Progressive Strategy to Boost Low-Rank Adaptation
  Zhan Zhuang, Xiequn Wang, Wei Li, Yulong Zhang, Qiushi Huang, ..., Yanbin Wei, Yuhe Nie, Kede Ma, Yu Zhang, Ying Wei
  06 Jun 2025
• Gradient Similarity Surgery in Multi-Task Deep Learning
  Thomas Borsani, Andrea Rosani, Giuseppe Nicosia, Giuseppe Di Fatta
  06 Jun 2025 · MedIm
• A MISMATCHED Benchmark for Scientific Natural Language Inference
  Firoz Shaik, Mobashir Sadat, Nikita Gautam, Doina Caragea, Cornelia Caragea
  05 Jun 2025
• MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark
  Junjie Xing, Yeye He, Mengyu Zhou, Haoyu Dong, Shi Han, Lingjiao Chen, Dongmei Zhang, S. Chaudhuri, H. V. Jagadish
  05 Jun 2025 · LMTD, ELM, LRM
• Leveraging Self-Attention for Input-Dependent Soft Prompting in LLMs
  Ananth Muppidi, Abhilash Nandy, Sambaran Bandyopadhyay
  05 Jun 2025
• RewardAnything: Generalizable Principle-Following Reward Models
  Zhuohao Yu, Jiali Zeng, Weizheng Gu, Yidong Wang, Jindong Wang, Fandong Meng, Jie Zhou, Yue Zhang, Shikun Zhang, Wei Ye
  04 Jun 2025 · LRM
• WeightLoRA: Keep Only Necessary Adapters
  Andrey Veprikov, Vladimir Solodkin, Alexander Zyl, Andrey Savchenko, Aleksandr Beznosikov
  03 Jun 2025
• QKV Projections Require a Fraction of Their Memory
  Malik Khalf, Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster
  03 Jun 2025 · MQ, VLM
• FroM: Frobenius Norm-Based Data-Free Adaptive Model Merging
  Zijian Li, Xiaocheng Feng, Huixin Liu, Yichong Huang, Ting Liu, Bing Qin
  03 Jun 2025 · MoMe
• Adaptive Task Vectors for Large Language Models
  Joonseong Kang, Soojeong Lee, Subeen Park, Sumin Park, Taero Kim, Jihee Kim, Ryunyi Lee, Kyungwoo Song
  03 Jun 2025
• PoLAR: Polar-Decomposed Low-Rank Adapter Representation
  Kai Lion, Liang Zhang, Bingcong Li, Niao He
  03 Jun 2025
• MLorc: Momentum Low-rank Compression for Large Language Model Adaptation
  Wei Shen, Zhang Yaxiang, Minhui Huang, Mengfan Xu, Jiawei Zhang, Cong Shen
  02 Jun 2025 · AI4CE
• Natural, Artificial, and Human Intelligences
  E. Pothos, Dominic Widdows
  02 Jun 2025
• Taming LLMs by Scaling Learning Rates with Gradient Grouping
  Siyuan Li, Juanxi Tian, Zedong Wang, Xin Jin, Zicheng Liu, Wentao Zhang, Dan Xu
  01 Jun 2025
• LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning
  Zihang Liu, Tianyu Pang, Oleg Balabanov, Chaoqun Yang, Tianjin Huang, L. Yin, Yaoqing Yang, Shiwei Liu
  01 Jun 2025 · LRM
• DefenderBench: A Toolkit for Evaluating Language Agents in Cybersecurity Environments
  Chiyu Zhang, Marc-Alexandre Cote, Michael Albada, Anush Sankaran, Jack W. Stokes, Tong Wang, Amir H. Abdi, William Blum, Muhammad Abdul-Mageed
  31 May 2025 · LLMAG, AAML, ELM
• Data Swarms: Optimizable Generation of Synthetic Evaluation Data
  Shangbin Feng, Yike Wang, Weijia Shi, Yulia Tsvetkov
  31 May 2025
• SCOUT: Teaching Pre-trained Language Models to Enhance Reasoning via Flow Chain-of-Thought
  Guanghao Li, Wenhao Jiang, Mingfeng Chen, Yan Li, Hao Yu, Shuting Dong, Tao Ren, Ming Tang, Chun Yuan
  30 May 2025 · ReLM, LRM
• Chameleon: A Flexible Data-mixing Framework for Language Model Pretraining and Finetuning
  Wanyun Xie, F. Tonin, Volkan Cevher
  30 May 2025
• MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs
  Gabrielle Kaili-May Liu, Gal Yona, Avi Caciularu, Idan Szpektor, Tim G. J. Rudner, Arman Cohan
  30 May 2025
• Diversity of Transformer Layers: One Aspect of Parameter Scaling Laws
  Hidetaka Kamigaito, Ying Zhang, Jingun Kwon, Katsuhiko Hayashi, Manabu Okumura, Taro Watanabe
  29 May 2025 · MoE
• Decom-Renorm-Merge: Model Merging on the Right Space Improves Multitasking
  Yuatyong Chaichana, Thanapat Trachu, Peerat Limkonchotiwat, Konpat Preechakul, Tirasan Khandhawit, Ekapol Chuangsuwanich
  29 May 2025 · MoMe
• MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection
  Yixian Shen, Qi Bi, Jia-Hong Huang, Hongyi Zhu, Andy D. Pimentel, Anuj Pathania
  29 May 2025