SAFE: Finding Sparse and Flat Minima to Improve Pruning

7 June 2025
Dongyeop Lee
Kwanhee Lee
Jinseok Chung
Namhoon Lee
Main: 8 pages · 5 figures · 12 tables · Bibliography: 4 pages · Appendix: 10 pages
Abstract

Sparsifying neural networks often suffers from seemingly inevitable performance degradation, and restoring the original performance remains challenging despite much recent progress. Motivated by recent studies in robust optimization, we aim to tackle this problem by finding subnetworks that are both sparse and flat. Specifically, we formulate pruning as a sparsity-constrained optimization problem in which flatness is encouraged as an objective. We solve it explicitly via an augmented Lagrange dual approach and extend it further by proposing a generalized projection operation, resulting in a novel pruning method called SAFE and its extension, SAFE+. Extensive evaluations on standard image classification and language modeling tasks reveal that SAFE consistently yields sparse networks with improved generalization performance that compares competitively with well-established baselines. In addition, SAFE demonstrates resilience to noisy data, making it well suited for real-world conditions.
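
The abstract describes combining a flatness-encouraging objective with a sparsity constraint handled through an augmented Lagrange dual and a projection step. The sketch below illustrates one plausible reading of that recipe, assuming a SAM-style weight perturbation for flatness and an ADMM-style split with top-k hard thresholding for the sparsity projection; the function names, update order, and hyperparameters (rho, sam_radius, the 10% density) are illustrative assumptions, not the authors' SAFE implementation.

# A minimal, self-contained sketch of the idea in the abstract: encourage flat
# minima via a SAM-style weight perturbation while enforcing sparsity through an
# ADMM-style augmented-Lagrangian split with a hard-thresholding projection.
# All names and hyperparameters are illustrative assumptions, not the paper's code.
import torch


def topk_project(w: torch.Tensor, k: int) -> torch.Tensor:
    """Project onto the set of tensors with at most k nonzero entries."""
    flat = w.flatten()
    out = torch.zeros_like(flat)
    idx = flat.abs().topk(k).indices
    out[idx] = flat[idx]
    return out.view_as(w)


def safe_like_step(model, loss_fn, batch, z, u, rho=1e-3, sam_radius=0.05, lr=1e-2):
    """One step: SAM-style perturbation plus an augmented-Lagrangian sparsity penalty."""
    x, y = batch
    params = list(model.parameters())

    # 1) Ascent step: perturb weights toward higher loss to probe sharpness.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    eps = [sam_radius * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # 2) Descent step at the perturbed point, with the augmented-Lagrangian
    #    penalty (rho/2)||w - z + u||^2 pulling weights toward their sparse duplicates.
    loss = loss_fn(model(x), y)
    loss = loss + sum(
        0.5 * rho * (p - zi + ui).pow(2).sum()
        for p, zi, ui in zip(params, z, u)
    )
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        # Undo the perturbation, then apply the gradient computed at the perturbed point.
        for p, e, g in zip(params, eps, grads):
            p.sub_(e)
            p.sub_(lr * g)

    # 3) Sparse-variable and dual updates (projection onto the sparsity constraint).
    with torch.no_grad():
        for p, zi, ui in zip(params, z, u):
            k = max(1, int(0.1 * p.numel()))  # keep 10% of weights (illustrative)
            zi.copy_(topk_project(p + ui, k))
            ui.add_(p - zi)

In this sketch, the auxiliary variables would be initialized once before training, e.g. z = [p.detach().clone() for p in model.parameters()] and u = [torch.zeros_like(p) for p in model.parameters()], and safe_like_step would be called once per mini-batch; at the end, the sparse duplicates z give the pruned network.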

View on arXiv: https://arxiv.org/abs/2506.06866
@article{lee2025_2506.06866,
  title={SAFE: Finding Sparse and Flat Minima to Improve Pruning},
  author={Dongyeop Lee and Kwanhee Lee and Jinseok Chung and Namhoon Lee},
  journal={arXiv preprint arXiv:2506.06866},
  year={2025}
}