Trade-off between Gradient Measurement Efficiency and Expressivity in Deep Quantum Neural Networks

Quantum neural networks (QNNs) require an efficient training algorithm to achieve practical quantum advantages. A promising approach is gradient-based optimization, in which gradients are estimated from quantum measurements. However, QNNs currently lack general quantum algorithms for measuring gradients efficiently, which limits their scalability. To elucidate the fundamental limits and potential of efficient gradient estimation, we rigorously prove a trade-off between gradient measurement efficiency (the mean number of simultaneously measurable gradient components) and expressivity in deep QNNs. This trade-off implies that more expressive QNNs require higher measurement costs per parameter for gradient estimation, whereas reducing a QNN's expressivity to suit a given task can increase its gradient measurement efficiency. We further propose a general QNN ansatz, the stabilizer-logical product ansatz (SLPA), which saturates the upper bound of the trade-off by exploiting the symmetric structure of the quantum circuit. Numerical experiments show that, compared with well-designed circuits trained via the parameter-shift method, the SLPA drastically reduces the sample complexity needed for training while maintaining accuracy and trainability.
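As context for the baseline the abstract compares against, here is a minimal sketch of the parameter-shift rule on a one-qubit toy circuit (a NumPy statevector simulation; the circuit, observable, and function names are illustrative choices, not taken from the paper). The rule requires two separate expectation-value estimates per parameter, which is the kind of per-parameter measurement cost the trade-off result concerns.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """Single-qubit rotation RX(theta) = exp(-i * theta * X / 2)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def expectation(theta):
    """<Z> on RX(theta)|0>; stands in for a measured QNN output."""
    psi = rx(theta) @ np.array([1, 0], dtype=complex)
    return np.real(psi.conj() @ Z @ psi)

def parameter_shift_grad(theta):
    """Exact gradient from two circuit evaluations at theta +/- pi/2,
    i.e., two independent rounds of measurement for one parameter."""
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.3
print(parameter_shift_grad(theta))  # ~ -0.2955
print(-np.sin(theta))               # analytic derivative of <Z> = cos(theta)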
@article{chinzei2025_2406.18316,
  title   = {Trade-off between Gradient Measurement Efficiency and Expressivity in Deep Quantum Neural Networks},
  author  = {Koki Chinzei and Shinichiro Yamano and Quoc Hoan Tran and Yasuhiro Endo and Hirotaka Oshima},
  journal = {arXiv preprint arXiv:2406.18316},
  year    = {2025}
}