Navigating the Latent Space Dynamics of Neural Models

28 May 2025
Marco Fumero
Luca Moschella
Emanuele Rodolà
Francesco Locatello
arXiv: 2505.22785
Main: 9 pages · Appendix: 9 pages · Bibliography: 4 pages · 14 figures · 3 tables
Abstract

Neural networks transform high-dimensional data into compact, structured representations, often modeled as elements of a lower-dimensional latent space. In this paper, we present an alternative interpretation of neural models as dynamical systems acting on the latent manifold. Specifically, we show that autoencoder models implicitly define a latent vector field on the manifold, derived by iteratively applying the encoding-decoding map, without any additional training. We observe that standard training procedures introduce inductive biases that lead to the emergence of attractor points within this vector field. Drawing on this insight, we propose to leverage the vector field as a representation for the network, providing a novel tool to analyze the properties of the model and the data. This representation enables us to: (i) analyze the generalization and memorization regimes of neural models, even throughout training; (ii) extract prior knowledge encoded in the network's parameters from the attractors, without requiring any input data; (iii) identify out-of-distribution samples from their trajectories in the vector field. We further validate our approach on vision foundation models, showcasing the applicability and effectiveness of our method in real-world scenarios.
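The abstract describes the latent vector field as arising from iterating the encode-decode map of a trained autoencoder. A minimal sketch of that construction, assuming a generic PyTorch autoencoder with `encoder` and `decoder` modules (the function names and step count are illustrative, not taken from the paper's code):

```python
import torch


@torch.no_grad()
def latent_trajectory(encoder, decoder, x, n_steps=50):
    """Iterate the encode-decode map z_{t+1} = E(D(z_t)) starting from z_0 = E(x).

    Trajectories that stop moving have reached an (approximate) attractor
    of the latent dynamics induced by the trained autoencoder.
    """
    z = encoder(x)                  # initial latent code z_0
    traj = [z]
    for _ in range(n_steps):
        z = encoder(decoder(z))     # one step of the latent dynamics
        traj.append(z)
    return torch.stack(traj)        # shape: (n_steps + 1, batch, latent_dim)


@torch.no_grad()
def latent_vector_field(encoder, decoder, z):
    """Vector field value at z: the residual of one encode-decode step."""
    return encoder(decoder(z)) - z
```

Under this reading, out-of-distribution inputs could be flagged by how far or how slowly their trajectories converge toward an attractor, in the spirit of point (iii) above.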

@article{fumero2025_2505.22785,
  title={Navigating the Latent Space Dynamics of Neural Models},
  author={Marco Fumero and Luca Moschella and Emanuele Rodolà and Francesco Locatello},
  journal={arXiv preprint arXiv:2505.22785},
  year={2025}
}