Learning finitely correlated states: stability of the spectral reconstruction

Matrix product operators allow efficient descriptions (or realizations) of states on a 1D lattice. We consider the task of learning a realization of minimal dimension from copies of an unknown state, such that the resulting operator is close to the density matrix in trace norm. For finitely correlated translation-invariant states on an infinite chain, a realization of minimal dimension can be exactly reconstructed via linear algebra operations from marginals whose size depends on the representation dimension. We establish a bound on the trace norm error for an algorithm that estimates a candidate realization from estimates of these marginals and outputs a matrix product operator, estimating the state of a chain of arbitrary length. This bound allows us to establish an upper bound on the sample complexity of the learning task, with an explicit dependence on the site dimension, the realization dimension, and the spectral properties of a certain map constructed from the state. A refined error bound can be proven for C*-finitely correlated states, which have an operational interpretation in terms of sequential quantum channels applied to the memory system. We also obtain an analogous error bound for a class of matrix product density operators on a finite chain that are reconstructible from local marginals. In this case, a linear number of marginals must be estimated, yielding a corresponding bound on the sample complexity. The learning algorithm also works for states that are sufficiently close to a finitely correlated state, potentially providing competitive algorithms for other interesting families of states.
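
The reconstruction step referred to above is, at its core, a spectral routine: build Hankel-like matrices from marginal expectation values, truncate their SVD at the realization dimension, and recover the transfer operators by pseudo-inversion. The sketch below illustrates this linear-algebra step on a classical stand-in (spectral learning of a stationary hidden Markov model / observable operator model) rather than on the quantum setting of the paper; the dimensions d and D, the prefix/suffix length l, and the random model T, O are illustrative assumptions, not the paper's algorithm.

import numpy as np
from itertools import product

d, D, l = 2, 2, 2   # output (site) dimension, realization dimension, prefix/suffix length
rng = np.random.default_rng(0)

# A random stationary hidden Markov model plays the role of the unknown state:
# row-stochastic transitions T, emission probabilities O, "transfer operators"
# E_x = diag(O[:, x]) @ T, and stationary distribution pi (so that pi @ T == pi).
T = rng.random((D, D)); T /= T.sum(axis=1, keepdims=True)
O = rng.random((D, d)); O /= O.sum(axis=1, keepdims=True)
E = [np.diag(O[:, x]) @ T for x in range(d)]
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()

def prob(word):
    """Probability of an output word: pi E_{x1} ... E_{xt} 1."""
    vec = pi.copy()
    for x in word:
        vec = vec @ E[x]
    return float(vec.sum())

# "Marginals": probabilities of words of length 2l and 2l+1, arranged into
# Hankel-like matrices H[u, v] = p(uv) and H_x[u, v] = p(u x v).
words = list(product(range(d), repeat=l))
H  = np.array([[prob(u + v) for v in words] for u in words])
Hx = [np.array([[prob(u + (x,) + v) for v in words] for u in words]) for x in range(d)]
hP = np.array([prob(u) for u in words])   # prefix probabilities
hS = hP                                   # suffix probabilities (equal by stationarity)

# Spectral reconstruction: a rank-D truncated SVD gives a factorization
# H = (U_D Sigma_D) V_D^T; pseudo-inverting the factors yields a realization
# equal to the true one up to an (unknown) similarity transform.
U, s, Vt = np.linalg.svd(H)
UD, sD, VD = U[:, :D], s[:D], Vt[:D, :].T
Pinv = UD.T / sD[:, None]                      # (U_D Sigma_D)^+ = Sigma_D^{-1} U_D^T
A_hat = [Pinv @ Hx[x] @ VD for x in range(d)]  # reconstructed transfer operators
alpha_hat = hS @ VD                            # reconstructed left boundary vector
beta_hat = Pinv @ hP                           # reconstructed right boundary vector

def prob_hat(word):
    """Probability of a word under the reconstructed realization."""
    vec = alpha_hat.copy()
    for x in word:
        vec = vec @ A_hat[x]
    return float(vec @ beta_hat)

# Sanity check: the reconstructed realization reproduces marginals of any length.
test = (0, 1, 1, 0, 1)
print(prob(test), prob_hat(test))   # agree up to numerical error

In the quantum setting of the paper the same pattern applies with marginal density matrices in place of word probabilities, and the stability question is how errors in the estimated marginals propagate through the SVD and pseudo-inverse into the trace-norm error of the output matrix product operator.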
@article{fanizza2025_2312.07516,
  title   = {Learning finitely correlated states: stability of the spectral reconstruction},
  author  = {Marco Fanizza and Niklas Galke and Josep Lumbreras and Cambyse Rouzé and Andreas Winter},
  journal = {arXiv preprint arXiv:2312.07516},
  year    = {2025}
}