arXiv:1811.01753
How deep is deep enough? -- Quantifying class separability in the hidden layers of deep neural networks
5 November 2018
Junhong Lin, C. Metzner, Andreas K. Maier, V. Cevher, Holger Schulze, Patrick Krauss
Papers citing "How deep is deep enough? -- Quantifying class separability in the hidden layers of deep neural networks" (7 papers shown)
Classifying Overlapping Gaussian Mixtures in High Dimensions: From Optimal Classifiers to Neural Nets
Khen Cohen, Noam Levi, Yaron Oz (28 May 2024)

SCHEME: Scalable Channel Mixer for Vision Transformers
Deepak Sridhar, Yunsheng Li, Nuno Vasconcelos (1 Dec 2023)

Quantifying the Variability Collapse of Neural Networks
Jing-Xue Xu, Haoxiong Liu (6 Jun 2023)

Classification at the Accuracy Limit -- Facing the Problem of Data Ambiguity
C. Metzner, A. Schilling, M. Traxdorf, K. Tziridis, Holger Schulze, P. Krauss (4 Jun 2022)

Neural Network based Successor Representations of Space and Language
Paul Stoewer, Christian Schlieker, A. Schilling, C. Metzner, Andreas K. Maier, P. Krauss (22 Feb 2022)

A Novel Intrinsic Measure of Data Separability
Shuyue Guan, Murray H. Loew (11 Sep 2021)

Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring
Sibylle Hess, W. Duivesteijn, D. Mocanu (7 Jan 2020)