Why should autoencoders work?

3 October 2023
Matthew D. Kvalheim
E.D. Sontag
Abstract

Deep neural network autoencoders are routinely used computationally for model reduction. They allow recognizing the intrinsic dimension of data that lie in a $k$-dimensional subset $K$ of an input Euclidean space $\mathbb{R}^n$. The underlying idea is to obtain both an encoding layer that maps $\mathbb{R}^n$ into $\mathbb{R}^k$ (called the bottleneck layer or the space of latent variables) and a decoding layer that maps $\mathbb{R}^k$ back into $\mathbb{R}^n$, in such a way that the input data from the set $K$ is recovered when composing the two maps. This is achieved by adjusting parameters (weights) in the network to minimize the discrepancy between the input and the reconstructed output. Since neural networks (with continuous activation functions) compute continuous maps, the existence of a network that achieves perfect reconstruction would imply that $K$ is homeomorphic to a $k$-dimensional subset of $\mathbb{R}^k$, so clearly there are topological obstructions to finding such a network. On the other hand, in practice the technique is found to "work" well, which leads one to ask if there is a way to explain this effectiveness. We show that, up to small errors, indeed the method is guaranteed to work. This is done by appealing to certain facts from differential topology. A computational example is also included to illustrate the ideas.
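
As a concrete illustration of the setup described in the abstract, the sketch below (not the paper's own computational example; the architecture, data, and hyperparameters are illustrative assumptions) trains an encoder $\mathbb{R}^3 \to \mathbb{R}^1$ and a decoder $\mathbb{R}^1 \to \mathbb{R}^3$ to reconstruct points lying on a circle embedded in $\mathbb{R}^3$. The circle is a 1-dimensional set that is not homeomorphic to any subset of $\mathbb{R}$, so perfect reconstruction is impossible; nevertheless the reconstruction error can typically be driven small away from a small portion of the circle, in the spirit of the "works up to small errors" phenomenon the paper analyzes.

```python
# Minimal autoencoder sketch (illustrative assumptions, not the authors' example).
import torch
import torch.nn as nn

n, k = 3, 1  # ambient dimension n and bottleneck (latent) dimension k

# Data on a 1-dimensional subset K of R^3: a circle in the x-y plane.
theta = torch.rand(1024, 1) * 2 * torch.pi
X = torch.cat([torch.cos(theta), torch.sin(theta), torch.zeros_like(theta)], dim=1)

# Encoder R^n -> R^k and decoder R^k -> R^n with continuous activations.
encoder = nn.Sequential(nn.Linear(n, 32), nn.Tanh(), nn.Linear(32, k))
decoder = nn.Sequential(nn.Linear(k, 32), nn.Tanh(), nn.Linear(32, n))

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2
)
loss_fn = nn.MSELoss()

# Adjust the weights to minimize the discrepancy between the input
# and the reconstruction obtained by composing the two maps.
for step in range(2000):
    opt.zero_grad()
    X_hat = decoder(encoder(X))
    loss = loss_fn(X_hat, X)
    loss.backward()
    opt.step()

print(f"final reconstruction MSE: {loss.item():.2e}")
```

Plotting the reconstructed points against the originals would typically show the output tracking the circle except near a small gap, the residual error that the topological obstruction forces on any continuous encoder-decoder pair.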
