
Deep Learning Meets Sparse Regularization: A Signal Processing Perspective

Rahul Parhi, Robert D. Nowak
23 January 2023
arXiv: 2301.09554

Papers citing "Deep Learning Meets Sparse Regularization: A Signal Processing Perspective"

18 papers:

  1. An Overview of Low-Rank Structures in the Training and Adaptation of Large Models
     Laura Balzano, Tianjiao Ding, B. Haeffele, Soo Min Kwon, Qing Qu, Peng Wang, Z. Wang, Can Yaras. 25 Mar 2025.
  2. Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
     Chris Kolb, T. Weber, Bernd Bischl, David Rügamer. 04 Feb 2025.
  3. The Effects of Multi-Task Learning on ReLU Neural Network Functions
     Julia B. Nakhleh, Joseph Shenouda, Robert D. Nowak. 29 Oct 2024.
  4. ReTok: Replacing Tokenizer to Enhance Representation Efficiency in Large Language Model
     Shuhao Gu, Mengdi Zhao, Bowen Zhang, Liangdong Wang, Jijie Li, Guang Liu. 06 Oct 2024.
  5. Approaching Deep Learning through the Spectral Dynamics of Weights
     David Yunis, Kumar Kshitij Patel, Samuel Wheeler, Pedro H. P. Savarese, Gal Vardi, Karen Livescu, Michael Maire, Matthew R. Walter. 21 Aug 2024.
  6. ReLUs Are Sufficient for Learning Implicit Neural Representations
     Joseph Shenouda, Yamin Zhou, Robert D. Nowak. 04 Jun 2024.
  7. Towards a Sampling Theory for Implicit Neural Representations
     Mahrokh Najaf, Gregory Ongie. 28 May 2024.
  8. Random ReLU Neural Networks as Non-Gaussian Processes
     Rahul Parhi, Pakshal Bohra, Ayoub El Biari, Mehrsa Pourya, Michael Unser. 16 May 2024.
  9. Efficient Algorithms for Regularized Nonnegative Scale-invariant Low-rank Approximation Models
     Jeremy E. Cohen, Valentin Leplat. 27 Mar 2024.
  10. Applied Causal Inference Powered by ML and AI
      Victor Chernozhukov, Christian Hansen, Nathan Kallus, Martin Spindler, Vasilis Syrgkanis. 04 Mar 2024.
  11. Function-Space Optimality of Neural Architectures with Multivariate Nonlinearities
      Rahul Parhi, Michael Unser. 05 Oct 2023.
  12. Weighted variation spaces and approximation by shallow ReLU networks
      Ronald A. DeVore, Robert D. Nowak, Rahul Parhi, Jonathan W. Siegel. 28 Jul 2023.
  13. Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression
      Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak. 25 May 2023.
  14. Penalising the biases in norm regularisation enforces sparsity
      Etienne Boursier, Nicolas Flammarion. 02 Mar 2023.
  15. Convergence Rates of Oblique Regression Trees for Flexible Function Libraries
      M. D. Cattaneo, Rajita Chandak, Jason M. Klusowski. 26 Oct 2022.
  16. From Kernel Methods to Neural Networks: A Unifying Variational Formulation
      M. Unser. 29 Jun 2022.
  17. Understanding neural networks with reproducing kernel Banach spaces
      Francesca Bartolucci, E. De Vito, Lorenzo Rosasco, S. Vigogna. 20 Sep 2021.
  18. Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks
      Rahul Parhi, Robert D. Nowak. 18 Sep 2021.