ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Efficient batchwise dropout training using submatrices

9 February 2015
Ben Graham
Jeremy Reizenstein
Leigh Robinson

Papers citing "Efficient batchwise dropout training using submatrices"

4 / 4 papers shown

  1. DISTREAL: Distributed Resource-Aware Learning in Heterogeneous Systems
     Martin Rapp, R. Khalili, Kilian Pfeiffer, J. Henkel
     16 Dec 2021
  2. Faster Neural Network Training with Approximate Tensor Operations
     Menachem Adelman, Kfir Y. Levy, Ido Hakimi, M. Silberstein
     21 May 2018
  3. Improved Dropout for Shallow and Deep Learning
     Zhe Li, Boqing Gong, Tianbao Yang
     Tags: BDL, SyDa
     06 Feb 2016
  4. Dropout as data augmentation
     Xavier Bouthillier, K. Konda, Pascal Vincent, Roland Memisevic
     29 Jun 2015