Quantum Algorithms for the Pathwise Lasso

Abstract

We present a novel quantum high-dimensional linear regression algorithm with an $\ell_1$-penalty based on the classical LARS (Least Angle Regression) pathwise algorithm. Similarly to available classical algorithms for Lasso, our quantum algorithm provides the full regularisation path as the penalty term varies, but quadratically faster per iteration under specific conditions. A quadratic speedup on the number of features $d$ is possible by using the simple quantum minimum-finding subroutine from Dürr and Høyer (arXiv'96) in order to obtain the joining time at each iteration. We then improve upon this simple quantum algorithm and obtain a quadratic speedup both in the number of features $d$ and the number of observations $n$ by using the approximate quantum minimum-finding subroutine from Chen and de Wolf (ICALP'23). In order to do so, we approximately compute the joining times to be searched over by the approximate quantum minimum-finding subroutine. As another main contribution, we prove, via an approximate version of the KKT conditions and a duality gap, that the LARS algorithm (and therefore our quantum algorithm) is robust to errors. This means that it still outputs a path that minimises the Lasso cost function up to a small error if the joining times are only approximately computed. Furthermore, we show that, when the observations are sampled from a Gaussian distribution, our quantum algorithm's complexity only depends polylogarithmically on $n$, exponentially better than the classical LARS algorithm, while keeping the quadratic improvement on $d$. Moreover, we propose a dequantised version of our quantum algorithm that also retains the polylogarithmic dependence on $n$, albeit presenting the linear scaling on $d$ from the standard LARS algorithm. Finally, we prove query lower bounds for classical and quantum Lasso algorithms.
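To make the "joining time" step concrete, below is a minimal classical sketch (not the authors' implementation) of the breakpoint search inside one LARS iteration: for each inactive feature, compute the step length at which its correlation catches up with the active set's, then take the minimum. The function name `lars_joining_step` and the toy setup are illustrative assumptions; the classical scan over candidates costs $O(nd)$ per iteration, and it is this argmin over roughly $d$ candidates that the quantum minimum-finding subroutine accelerates to roughly $\sqrt{d}$ queries.

```python
import numpy as np

def lars_joining_step(X, r, u, A_const, active):
    """Toy LARS breakpoint search (illustrative sketch only).

    For each inactive feature j, compute the candidate step lengths
    gamma at which j's correlation with the residual matches the
    shrinking correlation of the active set, and return the inactive
    feature with the smallest positive gamma (the next "joining time").
    """
    n, d = X.shape
    C = np.max(np.abs(X.T @ r))              # current max absolute correlation
    best_j, best_gamma = None, np.inf
    for j in (j for j in range(d) if j not in active):
        c_j = X[:, j] @ r                    # correlation of candidate feature
        a_j = X[:, j] @ u                    # alignment with equiangular direction
        for gamma in ((C - c_j) / (A_const - a_j),
                      (C + c_j) / (A_const + a_j)):
            if 1e-12 < gamma < best_gamma:   # keep smallest positive breakpoint
                best_j, best_gamma = j, gamma
    return best_j, best_gamma

# Toy example: one feature carries the signal; after it enters the
# active set, the search above finds which feature joins next.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
X /= np.linalg.norm(X, axis=0)               # unit-norm columns, as in LARS
y = X[:, 0] * 3.0 + 0.01 * rng.standard_normal(50)
r = y.copy()                                 # residual at the start of the path
j0 = int(np.argmax(np.abs(X.T @ r)))         # first feature to become active
u = X[:, j0]                                 # equiangular direction (one active feature)
j_next, gamma = lars_joining_step(X, r, u, 1.0, {j0})
```

The quantum algorithm replaces only the final argmin: the per-candidate quantity `gamma` is (approximately) computed on demand inside the minimum-finding routine rather than materialised for all $d$ features.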

@article{doriguello2025_2312.14141,
  title={Quantum Algorithms for the Pathwise Lasso},
  author={Joao F. Doriguello and Debbie Lim and Chi Seng Pun and Patrick Rebentrost and Tushar Vaidya},
  journal={arXiv preprint arXiv:2312.14141},
  year={2025}
}