Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations

International Conference on Learning Representations (ICLR), 2017
arXiv:1606.01305 (submitted 3 June 2016)
David M. Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal
Abstract

We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization yields state-of-the-art results on permuted sequential MNIST.
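The zoneout update described in the abstract is straightforward to implement. Below is a minimal sketch in PyTorch (chosen here for illustration; not the paper's original codebase): a wrapper around a vanilla RNN cell that, during training, lets each hidden unit keep its previous value with probability z, and at test time uses the expected update, a deterministic convex combination of the previous and candidate states. The class name `ZoneoutRNNCell` and the rate of 0.15 are illustrative assumptions, not the paper's exact configuration (the paper tunes separate rates per task, e.g. for LSTM cell and hidden states).

```python
import torch
import torch.nn as nn

class ZoneoutRNNCell(nn.Module):
    """Zoneout wrapper around a vanilla RNN cell (illustrative sketch).

    During training, each hidden unit keeps its previous value with
    probability `zoneout_prob`; otherwise it takes the ordinary update.
    At test time the expected (deterministic) update is used instead.
    """

    def __init__(self, input_size, hidden_size, zoneout_prob=0.15):
        super().__init__()
        self.cell = nn.RNNCell(input_size, hidden_size)
        self.zoneout_prob = zoneout_prob  # illustrative default

    def forward(self, x, h_prev):
        h_new = self.cell(x, h_prev)  # candidate next hidden state
        z = self.zoneout_prob
        if self.training:
            # mask == 1 -> unit "zones out": its previous value is preserved
            mask = torch.bernoulli(torch.full_like(h_new, z))
            return mask * h_prev + (1.0 - mask) * h_new
        # test time: deterministic convex combination (the expectation)
        return z * h_prev + (1.0 - z) * h_new
```

A short usage example under the same assumptions:

```python
cell = ZoneoutRNNCell(input_size=10, hidden_size=20)
h = torch.zeros(4, 20)                # batch of 4 sequences
for x_t in torch.randn(7, 4, 10):     # 7 timesteps of inputs
    h = cell(x_t, h)
```

Note how the identity path through the Bernoulli mask is what distinguishes zoneout from dropout: a zoned-out unit passes both its state and its gradient through the timestep unchanged, which is the mechanism the abstract credits for easier propagation of information through time.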
