Low-Loss Space in Neural Networks is Continuous and Fully Connected

Abstract

Visualizations of the loss landscape in neural networks suggest that minima are isolated points. However, both theoretical and empirical studies indicate that it is possible to connect two different minima with a path whose intermediate points also have low loss. In this study, we propose a new algorithm that investigates low-loss paths in the full parameter space, not only between two minima. Our experiments on the LeNet5, ResNet18, and Compact Convolutional Transformer architectures consistently demonstrate the existence of such continuous paths. These results suggest that the low-loss region is a continuous and fully connected subset of the parameter space. Our findings provide theoretical insight into neural network over-parameterization, highlighting that parameters collectively define a high-dimensional low-loss space, and implying that parameter redundancy exists only within individual models, not throughout the entire low-loss space. Our work also provides new visualization methods and opportunities to improve model generalization by exploring regions of the low-loss space that are closer to the origin.
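For context on the path-finding idea, the sketch below shows the standard linear-interpolation check between two trained minima, a common baseline in the mode-connectivity literature. This is not the paper's algorithm, which searches the full parameter space; it only evaluates the straight segment between two solutions, which typically crosses a high-loss barrier. The names model, loader, and loss_fn are assumed to be defined by the reader.

import torch

def interpolate_state(state_a, state_b, alpha):
    # Blend two state dicts: (1 - alpha) * theta_a + alpha * theta_b.
    # Non-floating buffers (e.g., BatchNorm counters) are copied from state_a.
    return {
        k: (1 - alpha) * v + alpha * state_b[k] if v.is_floating_point() else v
        for k, v in state_a.items()
    }

@torch.no_grad()
def loss_along_segment(model, state_a, state_b, loss_fn, loader, steps=11):
    # Average loss at evenly spaced points on the segment between two minima.
    losses = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        model.load_state_dict(interpolate_state(state_a, state_b, alpha.item()))
        model.eval()
        total, n = 0.0, 0
        for x, y in loader:
            total += loss_fn(model(x), y).item() * len(x)
            n += len(x)
        losses.append(total / n)
    return losses

A pronounced bump in the returned losses indicates a barrier on the straight line, which is exactly what more general low-loss-path methods are designed to route around.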

@article{tian2025_2505.02604,
  title={Low-Loss Space in Neural Networks is Continuous and Fully Connected},
  author={Yongding Tian and Zaid Al-Ars and Maksim Kitsak and Peter Hofstee},
  journal={arXiv preprint arXiv:2505.02604},
  year={2025}
}