Eigenpruning: an Interpretability-Inspired PEFT Method

4 April 2024
Tomás Vergara-Browne
Álvaro Soto
A. Aizawa
ArXiv (abs) · PDF · HTML · GitHub (2★)

Papers citing "Eigenpruning: an Interpretability-Inspired PEFT Method"

A Convex-optimization-based Layer-wise Post-training Pruner for Large Language Models
Pengxiang Zhao
Hanyu Hu
Ping Li
Yi Zheng
Zhefeng Wang
Xiaoming Yuan
07 Aug 2024