
Comb Tensor Networks vs. Matrix Product States: Enhanced Efficiency in High-Dimensional Spaces

Main: 3 pages
1 figure
Bibliography: 1 page
Abstract

Modern approaches to generative modeling of continuous data using tensor networks incorporate compression layers to capture the most meaningful features of high-dimensional inputs. These methods, however, rely on traditional Matrix Product State (MPS) architectures. Here, we demonstrate that beyond a certain threshold in data and bond dimensions, a comb-shaped tensor network architecture can yield more efficient contractions than a standard MPS. This finding suggests that for continuous and high-dimensional data distributions, transitioning from MPS to a comb tensor network representation can substantially reduce computational overhead while maintaining accuracy.
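To make the contraction-cost claim concrete, the following is a minimal toy sketch (not the paper's implementation) of evaluating a single amplitude of an MPS, the baseline architecture the abstract compares against. All shapes, the bond dimension `D`, and the helper names `random_mps` and `mps_amplitude` are illustrative assumptions; each site tensor has shape (left bond, physical, right bond), and the sweep costs O(D^2) per site.

```python
import numpy as np

def random_mps(n_sites, phys_dim=2, bond_dim=4, seed=0):
    """Build a random MPS as a list of (Dl, d, Dr) tensors with boundary bonds of size 1."""
    rng = np.random.default_rng(seed)
    tensors, left = [], 1
    for k in range(n_sites):
        right = 1 if k == n_sites - 1 else bond_dim
        tensors.append(rng.random((left, phys_dim, right)))
        left = right
    return tensors

def mps_amplitude(tensors, indices):
    """Contract the MPS at fixed physical indices by a left-to-right matrix sweep."""
    v = np.ones((1,))  # trivial left boundary vector
    for A, i in zip(tensors, indices):
        v = v @ A[:, i, :]  # (Dl,) @ (Dl, Dr) -> (Dr,): O(D^2) per site
    return v.item()  # final shape is (1,)

mps = random_mps(n_sites=6)
amp = mps_amplitude(mps, [0] * 6)  # amplitude of the all-zeros configuration
```

A comb network hangs "tooth" tensors off a shorter backbone, so the same kind of sweep touches fewer backbone bonds; the paper's claim is that beyond a threshold in data and bond dimensions this reordering pays off.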
