A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

11 October 2022
Yuanxin Liu, Fandong Meng, Zheng Lin, JiangNan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

Papers citing "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models"

5 / 5 papers shown
Not Eliminate but Aggregate: Post-Hoc Control over Mixture-of-Experts to Address Shortcut Shifts in Natural Language Understanding
Ukyo Honda, Tatsushi Oka, Peinan Zhang, Masato Mita
17 Jun 2024
Robust Natural Language Understanding with Residual Attention Debiasing
Fei Wang, James Y. Huang, Tianyi Yan, Wenxuan Zhou, Muhao Chen
28 May 2023
Task-Specific Skill Localization in Fine-tuned Language Models
A. Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora
MoMe
13 Feb 2023
UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Ying Jin, Zhendong Yang, Chun Yuan, Jiaqi Wang
VLM, ViT
31 Jan 2023
The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
23 Jul 2020