Dropout Training as Adaptive Regularization

4 July 2013
Stefan Wager, Sida I. Wang, Percy Liang

Papers citing "Dropout Training as Adaptive Regularization"

Showing 37 of 87 citing papers.
PANDA: AdaPtive Noisy Data Augmentation for Regularization of Undirected Graphical Models
Yinan Li, Xiao Liu, Fang Liu · 11 Oct 2018

Extracting representations of cognition across neuroimaging studies improves brain decoding
A. Mensch, Julien Mairal, B. Thirion, Gaël Varoquaux · AI4CE · 17 Sep 2018

Towards Understanding Regularization in Batch Normalization
Ping Luo, Xinjiang Wang, Wenqi Shao, Zhanglin Peng · MLT, AI4CE · 04 Sep 2018

On the Implicit Bias of Dropout
Poorya Mianjy, R. Arora, René Vidal · 26 Jun 2018

Boulevard: Regularized Stochastic Gradient Boosted Trees and Their Limiting Distribution
Yichen Zhou, Giles Hooker · UQCV · 26 Jun 2018

Data augmentation instead of explicit regularization
Alex Hernández-García, Peter König · 11 Jun 2018

Excitation Dropout: Encouraging Plasticity in Deep Neural Networks
Andrea Zunino, Sarah Adel Bargal, Pietro Morerio, Jianming Zhang, Stan Sclaroff, Vittorio Murino · 23 May 2018

Faster Neural Network Training with Approximate Tensor Operations
Menachem Adelman, Kfir Y. Levy, Ido Hakimi, M. Silberstein · 21 May 2018

Noisin: Unbiased Regularization for Recurrent Neural Networks
Adji Bousso Dieng, Rajesh Ranganath, Jaan Altosaar, David M. Blei · 03 May 2018

Posterior Concentration for Sparse Deep Learning
Nicholas G. Polson, Veronika Rockova · UQCV, BDL · 24 Mar 2018

Towards Principled Design of Deep Convolutional Networks: Introducing SimpNet
S. H. HasanPour, Mohammad Rouhani, Mohsen Fayyaz, Mohammad Sabokrou, Ehsan Adeli · 17 Feb 2018

The Hybrid Bootstrap: A Drop-in Replacement for Dropout
R. Kosar, D. W. Scott · BDL · 22 Jan 2018

Learning Neural Representations of Human Cognition across Many fMRI Studies
G. Flandin, D. Handwerker, Michael Hanke, D. Keator, Thomas E. Nichols · AI4CE · 31 Oct 2017

EndNet: Sparse AutoEncoder Network for Endmember Extraction and Hyperspectral Unmixing
Savas Ozkan, Berk Kaya, G. Akar · 06 Aug 2017

Curriculum Dropout
Pietro Morerio, Jacopo Cavazza, Riccardo Volpi, René Vidal, Vittorio Murino · ODL · 18 Mar 2017

Missing Data Imputation for Supervised Learning
Jason Poulos, Rafael Valle · 28 Oct 2016

Structured Dropout for Weak Label and Multi-Instance Learning and Its Application to Score-Informed Source Separation
Sebastian Ewert, Mark Sandler · 15 Sep 2016

Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures
S. H. HasanPour, Mohammad Rouhani, Mohsen Fayyaz, Mohammad Sabokrou · 22 Aug 2016

Regularization for Unsupervised Deep Neural Nets
Baiyang Wang, Diego Klabjan · BDL · 15 Aug 2016

Relative Natural Gradient for Learning Large Complex Models
Ke Sun, Frank Nielsen · 20 Jun 2016

On Complex Valued Convolutional Neural Networks
Nitzan Guberman · CVBM · 29 Feb 2016

Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms
Tom Zahavy, Bingyi Kang, Alex Sivak, Jiashi Feng, Huan Xu, Shie Mannor · OOD, AAML · 07 Feb 2016

Improved Dropout for Shallow and Deep Learning
Zhe Li, Boqing Gong, Tianbao Yang · BDL, SyDa · 06 Feb 2016

Semisupervised Autoencoder for Sentiment Analysis
Shuangfei Zhai, Zhongfei Zhang · 14 Dec 2015

Towards Dropout Training for Convolutional Neural Networks
Haibing Wu, Xiaodong Gu · 01 Dec 2015

Conditional Computation in Neural Networks for faster models
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup · AI4CE · 19 Nov 2015

On the interplay of network structure and gradient convergence in deep learning
V. Ithapu, Sathya Ravi, Vikas Singh · 17 Nov 2015

A Primer on Neural Network Models for Natural Language Processing
Yoav Goldberg · AI4CE · 02 Oct 2015

A Scale Mixture Perspective of Multiplicative Noise in Neural Networks
Eric T. Nalisnick, Anima Anandkumar, Padhraic Smyth · 10 Jun 2015

DART: Dropouts meet Multiple Additive Regression Trees
Rashmi Korlakai Vinayak, Ran Gilad-Bachrach · 07 May 2015

A Bayesian encourages dropout
S. Maeda · BDL · 22 Dec 2014

Neural Network Regularization via Robust Weight Factorization
Jan Rudy, Weiguang Ding, Daniel Jiwoong Im, Graham W. Taylor · OOD · 20 Dec 2014

Learning with Pseudo-Ensembles
Philip Bachman, O. Alsharif, Doina Precup · 16 Dec 2014

On the Inductive Bias of Dropout
D. Helmbold, Philip M. Long · 15 Dec 2014

Collaborative Deep Learning for Recommender Systems
Hao Wang, Naiyan Wang, Dit-Yan Yeung · BDL · 10 Sep 2014

An empirical analysis of dropout in piecewise linear networks
David Warde-Farley, Ian Goodfellow, Aaron Courville, Yoshua Bengio · 21 Dec 2013

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov · VLM · 03 Jul 2012