Training Deep Neural Networks Without Batch Normalization

18 August 2020 · arXiv: 2008.07970
Authors: D. Gaur, Joachim Folz, Andreas Dengel
Community: ODL

Papers citing "Training Deep Neural Networks Without Batch Normalization"

3 / 3 papers shown
| Title | Authors | Tags | Counts | Date |
|---|---|---|---|---|
| TinyCL: An Efficient Hardware Architecture for Continual Learning on Autonomous Systems | Eugenio Ressa, Alberto Marchisio, Maurizio Martina, Guido Masera, Mohamed Bennai | — | 129 / 0 / 0 | 15 Feb 2024 |
| Learning from Randomly Initialized Neural Network Features | Ehsan Amid, Rohan Anil, W. Kotłowski, Manfred K. Warmuth | MLT | 74 / 15 / 0 | 13 Feb 2022 |
| On Feature Decorrelation in Self-Supervised Learning | Tianyu Hua, Wenxiao Wang, Zihui Xue, Sucheng Ren, Yue Wang, Hang Zhao | SSL, OOD | 193 / 197 / 0 | 02 May 2021 |