Optimizing Hard Thresholding for Sparse Model Discovery

28 April 2025
Derek W. Jollie
Scott G. McCalla
Abstract

Many model selection algorithms rely on sparse dictionary learning to provide interpretable and physics-based governing equations. The optimization algorithms typically use a hard thresholding process to enforce sparse activations in the model coefficients by removing library elements from consideration. By introducing an annealing scheme that reactivates a fraction of the removed terms with a cooling schedule, we are able to improve the performance of these sparse learning algorithms. We concentrate on two approaches to the optimization: SINDy and an alternative using hard thresholding pursuit. We see in both cases that annealing can improve model accuracy. The effectiveness of annealing is demonstrated through comparisons on several nonlinear systems drawn from convective flows, excitable systems, and population dynamics. Finally, we apply these algorithms to experimental data for projectile motion.
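The annealed thresholding idea described in the abstract can be illustrated with a short sketch. The Python code below is a minimal, hypothetical implementation of sequentially thresholded least squares (the optimizer commonly used in SINDy) augmented with a reactivation step: after each hard-thresholding pass, a random fraction of the pruned library terms is returned to the active set, and that fraction shrinks under a geometric cooling schedule. The function name, parameter names, and default values are illustrative assumptions, not the authors' implementation.

import numpy as np

def annealed_stlsq(Theta, dxdt, threshold=0.1, n_iters=20,
                   reactivation_frac=0.5, cooling=0.8, rng=None):
    """Sketch: fit sparse coefficients Xi so that dxdt ~= Theta @ Xi,
    using hard thresholding with an annealed reactivation of pruned terms."""
    rng = np.random.default_rng(rng)
    n_features = Theta.shape[1]
    active = np.ones(n_features, dtype=bool)   # which library terms remain
    Xi = np.zeros(n_features)
    frac = reactivation_frac
    for _ in range(n_iters):
        # Least-squares fit restricted to the currently active library terms.
        Xi[:] = 0.0
        if active.any():
            Xi[active], *_ = np.linalg.lstsq(Theta[:, active], dxdt, rcond=None)
        # Hard thresholding: deactivate terms with small coefficients.
        active &= np.abs(Xi) >= threshold
        # Annealing: reactivate a random fraction of the pruned terms.
        pruned = np.flatnonzero(~active)
        n_back = int(frac * pruned.size)
        if n_back > 0:
            active[rng.choice(pruned, size=n_back, replace=False)] = True
        frac *= cooling  # cooling schedule shrinks the reactivated fraction
    # Final least-squares fit on the surviving support.
    Xi[:] = 0.0
    if active.any():
        Xi[active], *_ = np.linalg.lstsq(Theta[:, active], dxdt, rcond=None)
    return Xi

# Toy usage: recover dx/dt = -2*x from noisy samples with a small
# polynomial library [1, x, x^2] (all values here are made up).
x = np.linspace(-1.0, 1.0, 200)
dxdt = -2.0 * x + 0.01 * np.random.default_rng(0).normal(size=x.size)
Theta = np.column_stack([np.ones_like(x), x, x ** 2])
print(annealed_stlsq(Theta, dxdt))  # expected roughly [0, -2, 0]

The point of the reactivation step is that a term removed early (for example, because its coefficient was temporarily masked by correlated library elements) gets another chance to enter the model, while the cooling schedule ensures the iteration still converges to a sparse support.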

@article{jollie2025_2504.20256,
  title={Optimizing Hard Thresholding for Sparse Model Discovery},
  author={Derek W. Jollie and Scott G. McCalla},
  journal={arXiv preprint arXiv:2504.20256},
  year={2025}
}