arXiv:2003.10901
Dynamic Narrowing of VAE Bottlenecks Using GECO and L0 Regularization

IEEE International Joint Conference on Neural Networks (IJCNN), 2020
24 March 2020
Cedric De Boom
Samuel T. Wauthier
Tim Verbelen
Bart Dhoedt
Abstract

When designing variational autoencoders (VAEs) or other latent space models, the dimensionality of the latent space is typically fixed upfront. In this process, the number of dimensions may be under- or overprovisioned for the application at hand. If the dimensionality is not predefined, this parameter is usually determined with time- and resource-consuming cross-validation. For these reasons, we have developed a technique to shrink the latent space dimensionality of VAEs automatically and on-the-fly during training, using Generalized ELBO with Constrained Optimization (GECO) and the L0-Augment-REINFORCE-Merge (L0-ARM) gradient estimator. The GECO optimizer ensures that we do not violate a predefined upper bound on the reconstruction error. This paper presents the algorithmic details of our method along with experimental results on five different datasets. We find that our training procedure is stable and that the latent space can be pruned effectively without violating the GECO constraints.
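The core GECO mechanism the abstract refers to can be sketched as a constrained-optimization loop: a Lagrange multiplier on the reconstruction term is updated from an exponential moving average of the constraint violation, so pressure on the objective rises while the reconstruction error exceeds its tolerance and relaxes once the bound holds. The sketch below is an illustrative toy, not the authors' implementation; the function name `geco_update` and the `alpha`/`lr` values are assumptions.

```python
import numpy as np

def geco_update(lmbda, recon_err, tolerance, ema, alpha=0.99, lr=0.1):
    """One GECO-style step (toy sketch, hypothetical hyperparameters).

    Constraint: recon_err <= tolerance. We track an exponential moving
    average of the violation and rescale the multiplier multiplicatively,
    which keeps it strictly positive.
    """
    c = recon_err - tolerance                 # constraint violation (>0 means violated)
    ema = alpha * ema + (1.0 - alpha) * c     # smoothed violation estimate
    lmbda = lmbda * float(np.exp(lr * ema))   # grow multiplier while violated, shrink otherwise
    return lmbda, ema

# Toy run: reconstruction error stuck above the tolerance,
# so the multiplier on the reconstruction term keeps growing.
lmbda, ema = 1.0, 0.0
for _ in range(50):
    lmbda, ema = geco_update(lmbda, recon_err=2.0, tolerance=1.0, ema=ema)
```

In a real VAE training loop, `recon_err` would be the per-batch reconstruction loss and `lmbda` would weight that loss against the KL (and, in this paper's setting, the L0 gating) terms.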
