Learning Ordered Representations in Latent Space for Intrinsic Dimension Estimation via Principal Component Autoencoder

Qipeng Zhan
Zhuoping Zhou
Zexuan Wang
Li Shen
Main: 8 pages · Appendix: 5 pages · Bibliography: 2 pages · 2 figures · 12 tables
Abstract

Autoencoders have long been considered a nonlinear extension of Principal Component Analysis (PCA). Prior studies have demonstrated that linear autoencoders (LAEs) can recover the ordered, axis-aligned principal components of PCA by incorporating non-uniform $\ell_2$ regularization or by adjusting the loss function. However, these approaches become insufficient in the nonlinear setting, as the remaining variance cannot be properly captured independently of the nonlinear mapping. In this work, we propose a novel autoencoder framework that integrates non-uniform variance regularization with an isometric constraint. This design serves as a natural generalization of PCA, enabling the model to preserve key advantages, such as ordered representations and variance retention, while remaining effective for nonlinear dimensionality reduction tasks.
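To make the two ingredients concrete, below is a minimal PyTorch sketch of an autoencoder loss combining (i) a non-uniform variance penalty on the latent coordinates and (ii) an isometry surrogate. The network sizes, the linearly increasing penalty weights, and the pairwise-distance form of the isometry term are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch only: hyperparameters and penalty forms are assumptions,
# not the authors' published method.
import torch
import torch.nn as nn

class PCAutoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=8, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def loss_fn(model, x, lam_var=1e-2, lam_iso=1e-2):
    z, x_hat = model(x)
    recon = ((x - x_hat) ** 2).mean()

    # Non-uniform variance regularization: later latent coordinates
    # are penalized more strongly, so variance is pushed into the
    # leading dimensions, yielding an ordered representation
    # (hypothetical linear weight schedule).
    weights = torch.arange(1, z.shape[1] + 1, dtype=z.dtype, device=z.device)
    var_pen = (weights * z.var(dim=0)).sum()

    # Isometry surrogate: pairwise latent distances should match
    # pairwise input distances, so the retained latent variance
    # reflects variance in the data space.
    iso_pen = ((torch.cdist(x, x) - torch.cdist(z, z)) ** 2).mean()

    return recon + lam_var * var_pen + lam_iso * iso_pen
```

In a sketch like this, the intrinsic dimension can be read off from where the latent variance spectrum drops toward zero, analogous to inspecting PCA eigenvalues.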
