ResearchTrend.AI
Just CHOP: Embarrassingly Simple LLM Compression

24 May 2023
A. Jha
Tom Sherborne
Evan Pete Walsh
Dirk Groeneveld
Emma Strubell
Iz Beltagy

Papers citing "Just CHOP: Embarrassingly Simple LLM Compression"

5 / 5 papers shown
SD$^2$: Self-Distilled Sparse Drafters
Mike Lasby, Nish Sinnadurai, Valavan Manohararajah, Sean Lie, Vithursan Thangarasa
10 Apr 2025
Persistent Topological Features in Large Language Models
Yuri Gardinazzi, Giada Panerai, Karthik Viswanathan, A. Ansuini, Alberto Cazzaniga, Matteo Biagetti
14 Oct 2024
SliceGPT: Compress Large Language Models by Deleting Rows and Columns
Saleh Ashkboos, Maximilian L. Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, James Hensman
26 Jan 2024
Training Recipe for N:M Structured Sparsity with Decaying Pruning Mask
Sheng-Chun Kao, Amir Yazdanbakhsh, Suvinay Subramanian, Shivani Agrawal, Utku Evci, T. Krishna
15 Sep 2022
The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
23 Jul 2020