ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

TextPruner: A Model Pruning Toolkit for Pre-Trained Language Models
arXiv:2203.15996 · 30 March 2022
Ziqing Yang, Yiming Cui, Zhigang Chen

Papers citing "TextPruner: A Model Pruning Toolkit for Pre-Trained Language Models"

8 / 8 papers shown
Large Language Model Pruning
Hanjuan Huang, Hao-Jia Song, H. Pao
24 May 2024
RLHF Deciphered: A Critical Analysis of Reinforcement Learning from Human Feedback for LLMs
Shreyas Chaudhari, Pranjal Aggarwal, Vishvak Murahari, Tanmay Rajpurohit, A. Kalyan, Karthik Narasimhan, A. Deshpande, Bruno Castro da Silva
12 Apr 2024
NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models
Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, SeYoung Yun
16 Oct 2023
Improving Non-autoregressive Translation Quality with Pretrained Language Model, Embedding Distillation and Upsampling Strategy for CTC
Shensian Syu, Jun Xie, Hung-yi Lee
10 Jun 2023
MUX-PLMs: Data Multiplexing for High-throughput Language Models
Vishvak Murahari, A. Deshpande, Carlos E. Jimenez, Izhak Shafran, Mingqiu Wang, Yuan Cao, Karthik Narasimhan
24 Feb 2023
Gradient-based Intra-attention Pruning on Pre-trained Language Models
Ziqing Yang, Yiming Cui, Xin Yao, Shijin Wang
15 Dec 2022
Legal-Tech Open Diaries: Lesson learned on how to develop and deploy light-weight models in the era of humongous Language Models
Stelios Maroudas, Sotiris Legkas, Prodromos Malakasiotis, Ilias Chalkidis
24 Oct 2022
What Do Compressed Multilingual Machine Translation Models Forget?
Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, Laurent Besacier
22 May 2022