arXiv:1909.11274 · Cited By
Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network

25 September 2019
Taiji Suzuki, Hiroshi Abe, Tomoaki Nishimura

Papers citing "Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network"

12 of 12 citing papers shown:
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen
08 Jul 2024
Generalization Guarantees via Algorithm-dependent Rademacher Complexity
Sarah Sachs, T. Erven, Liam Hodgkinson, Rajiv Khanna, Umut Şimşekli
04 Jul 2023
Proximity to Losslessly Compressible Parameters
Matthew Farrugia-Roberts
05 Jun 2023
Koopman-based generalization bound: New aspect for full-rank weights
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki
12 Feb 2023
Generalization Bounds with Data-dependent Fractal Dimensions
Benjamin Dupuis, George Deligiannidis, Umut Şimşekli
06 Feb 2023
Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
17 May 2022
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
09 Mar 2022
Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Tolga Birdal, Aaron Lou, Leonidas J. Guibas, Umut Şimşekli
25 Nov 2021
Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
A. Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Şimşekli, Lingjiong Zhu
09 Jun 2021
Generalization bounds via distillation
Daniel J. Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang
12 Apr 2021
Decomposable-Net: Scalable Low-Rank Compression for Neural Networks
A. Yaguchi, Taiji Suzuki, Shuhei Nitta, Y. Sakata, A. Tanizawa
29 Oct 2019
Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015