ResearchTrend.AI
Compression-aware Training of Neural Networks using Frank-Wolfe
arXiv:2205.11921, v2 (latest)
24 May 2022 · Max Zimmer, Christoph Spiegel, Sebastian Pokutta

Papers citing "Compression-aware Training of Neural Networks using Frank-Wolfe"
10 of 10 papers shown
Don't Be Greedy, Just Relax! Pruning LLMs via Frank-Wolfe
15 Oct 2025 · Christophe Roux, Max Zimmer, Alexandre d’Aspremont, Sebastian Pokutta
Compression Aware Certified Training
13 Jun 2025 · Changming Xu, Gagandeep Singh
Approximating Latent Manifolds in Neural Networks via Vanishing Ideals
20 Feb 2025 · Nico Pelleriti, Max Zimmer, Elias Wirth, Sebastian Pokutta
Implicit Bias in Matrix Factorization and its Explicit Realization in a New Architecture
27 Jan 2025 · Yikun Hou, Suvrit Sra, A. Yurtsever
PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
23 Dec 2023 · Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta
ELSA: Partial Weight Freezing for Overhead-Free Sparse Network Deployment
11 Dec 2023 · Paniz Halvachi, Alexandra Peste, Dan Alistarh, Christoph H. Lampert
Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging
International Conference on Learning Representations (ICLR), 2023
29 Jun 2023 · Max Zimmer, Christoph Spiegel, Sebastian Pokutta
Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training
Journal of Systems Engineering and Electronics (JSEE), 2023
22 Mar 2023 · Xinwei Ou, Zhangxin Chen, Ce Zhu, Yipeng Liu
CrAM: A Compression-Aware Minimizer
International Conference on Learning Representations (ICLR), 2022
28 Jul 2022 · Alexandra Peste, Adrian Vladu, Eldar Kurtic, Christoph H. Lampert, Dan Alistarh
Renormalized Sparse Neural Network Pruning
21 Jun 2022 · Michael Rawson