ResearchTrend.AI
Sign-In to the Lottery: Reparameterizing Sparse Training From Scratch

17 April 2025
Advait Gadhikar
Tom Jacobs
Chao Zhou
Rebekka Burkholz
Abstract

The performance gap between training sparse neural networks from scratch (pruning at initialization, PaI) and dense-to-sparse training is a major roadblock for efficient deep learning. According to the Lottery Ticket Hypothesis, PaI hinges on finding a problem-specific parameter initialization. As we show, determining the correct parameter signs is sufficient for this purpose. Yet, correct signs remain elusive to PaI. To address this issue, we propose Sign-In, a dynamic reparameterization that provably induces sign flips. These sign flips are complementary to those that dense-to-sparse training can accomplish, rendering Sign-In an orthogonal method. While our experiments and theory suggest performance improvements for PaI, they also carve out the main open challenge: closing the gap between PaI and dense-to-sparse training.
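The abstract states that a dynamic reparameterization can provably induce sign flips during sparse training. The minimal sketch below illustrates the general idea with a product (overparameterized) factorization w = u * v, a common reparameterization in the implicit-bias literature; it is an assumption for illustration, not necessarily the paper's exact Sign-In scheme. A single masked weight initialized with the wrong sign flips sign under plain gradient descent on the factors, because one factor can pass through zero:

```python
import numpy as np

# Illustrative toy (assumed names u, v, lr): reparameterize a masked weight
# as w = u * v and run gradient descent on (u, v) for the scalar loss
# L(w) = 0.5 * (w - target)^2. The factored dynamics let w cross zero,
# flipping its sign even though it was initialized with the wrong sign.
target = 1.0
u, v = 1.0, -0.5          # init so that w = u * v = -0.5 (wrong sign)
lr = 0.1

signs = []
for step in range(500):
    w = u * v
    g = w - target                            # dL/dw
    # chain rule through w = u * v: dL/du = g * v, dL/dv = g * u
    u, v = u - lr * g * v, v - lr * g * u
    signs.append(np.sign(u * v))

print(signs[0], signs[-1])   # starts negative, ends positive: a sign flip
```

Running this, v crosses zero after a few steps and w converges toward the target, whereas the sign pattern of a direct sparse parameterization would evolve under different dynamics; this difference in which signs flip is the kind of effect the paper analyzes.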

@article{gadhikar2025_2504.12801,
  title={Sign-In to the Lottery: Reparameterizing Sparse Training From Scratch},
  author={Advait Gadhikar and Tom Jacobs and Chao Zhou and Rebekka Burkholz},
  journal={arXiv preprint arXiv:2504.12801},
  year={2025}
}