A Probabilistic Theory of Deep Learning


2 April 2015
Ankit B. Patel, M. T. Nguyen, Richard G. Baraniuk
BDL, OOD, UQCV

Papers citing "A Probabilistic Theory of Deep Learning"

13 / 13 papers shown
FedCLEAN: byzantine defense by CLustering Errors of Activation maps in Non-IID federated learning environments
Mehdi Ben Ghali, Reda Bellafqira, Gouenou Coatrieux
AAML, FedML
48 · 0 · 0
21 Jan 2025

Probing the Latent Hierarchical Structure of Data via Diffusion Models
Antonio Sclocchi, Alessandro Favero, Noam Itzhak Levi, M. Wyart
DiffM
35 · 3 · 0
17 Oct 2024

The Neural Race Reduction: Dynamics of Abstraction in Gated Networks
Andrew M. Saxe, Shagun Sodhani, Sam Lewallen
AI4CE
30 · 34 · 0
21 Jul 2022

Face representation by deep learning: a linear encoding in a parameter space?
Qiulei Dong, Jiaying Sun, Zhanyi Hu
CVBM
15 · 1 · 0
22 Oct 2019

DeepSigns: A Generic Watermarking Framework for IP Protection of Deep Learning Models
B. Rouhani, Huili Chen, F. Koushanfar
29 · 48 · 0
02 Apr 2018

A Probabilistic Framework for Deep Learning
Ankit B. Patel, M. T. Nguyen, Richard G. Baraniuk
BDL
21 · 67 · 0
06 Dec 2016

Distributed Sequence Memory of Multidimensional Inputs in Recurrent Networks
Adam S. Charles, Dong Yin, Christopher Rozell
GNN
33 · 20 · 0
26 May 2016

Training Neural Networks Without Gradients: A Scalable ADMM Approach
Gavin Taylor, R. Burmeister, Zheng Xu, Bharat Singh, Ankit B. Patel, Tom Goldstein
ODL
11 · 272 · 0
06 May 2016

A Simple Hierarchical Pooling Data Structure for Loop Closure
Xiaohan Fei, Konstantine Tsotsos, Stefano Soatto
20 · 13 · 0
20 Nov 2015

On the energy landscape of deep networks
Pratik Chaudhari, Stefano Soatto
ODL
40 · 27 · 0
20 Nov 2015

Why are deep nets reversible: A simple theory, with implications for training
Sanjeev Arora, Yingyu Liang, Tengyu Ma
9 · 54 · 0
18 Nov 2015

On the interplay of network structure and gradient convergence in deep learning
V. Ithapu, Sathya Ravi, Vikas Singh
16 · 3 · 0
17 Nov 2015

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
VLM
266 · 7,638 · 0
03 Jul 2012