Online Learning of Quantum States

25 February 2018
Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak
arXiv:1802.09025
Abstract

Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_1$, then other copies using a measurement $E_2$, and so on. At each stage $t$, we generate a current hypothesis $\sigma_t$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\operatorname{Tr}(E_i \sigma_t) - \operatorname{Tr}(E_i \rho)|$, the error in our prediction for the next measurement, is at least $\varepsilon$ at most $O(n/\varepsilon^2)$ times. Even in the "non-realizable" setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that do significantly worse than the best possible states at most $O(\sqrt{Tn})$ times on the first $T$ measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results, using convex optimization, quantum postselection, and sequential fat-shattering dimension, which have different advantages in terms of parameters and portability.
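To make the online setting concrete, the convex-optimization viewpoint can be illustrated as a loop that maintains a density-matrix hypothesis and updates it with a matrix-exponentiated-gradient step after each measurement. The sketch below is a toy illustration under stated assumptions, not the paper's exact algorithm: the synthetic measurement stream, the absolute-value loss, the learning rate eta, and the helper names (herm_expm, random_two_outcome_measurement, online_state_learner) are all choices made for this example.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact algorithm):
# online prediction of Tr(E_t rho) for an unknown n-qubit state, using a
# matrix-exponentiated-gradient update in the spirit of the convex-optimization proof.
import numpy as np


def herm_expm(h):
    """Matrix exponential of a Hermitian matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(w)) @ v.conj().T


def random_two_outcome_measurement(d, rng):
    """Sample a Hermitian E with 0 <= E <= I (one outcome of a two-outcome measurement)."""
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    w, v = np.linalg.eigh((a + a.conj().T) / 2)
    w = (w - w.min()) / (w.max() - w.min())      # rescale eigenvalues into [0, 1]
    return (v * w) @ v.conj().T


def online_state_learner(rho, T, eta=0.1, seed=0):
    """Run T rounds; return the prediction errors |Tr(E_t sigma_t) - Tr(E_t rho)|."""
    rng = np.random.default_rng(seed)
    d = rho.shape[0]
    grad_sum = np.zeros((d, d), dtype=complex)   # running sum of loss (sub)gradients
    errors = []
    for _ in range(T):
        # Hypothesis sigma_t proportional to exp(-eta * sum of past gradients).
        m = herm_expm(-eta * grad_sum)
        sigma = m / np.trace(m).real

        E = random_two_outcome_measurement(d, rng)
        pred = np.trace(E @ sigma).real          # our prediction Tr(E_t sigma_t)
        truth = np.trace(E @ rho).real           # feedback, here noiseless Tr(E_t rho)
        errors.append(abs(pred - truth))

        # Subgradient of the loss |Tr(E sigma) - truth| with respect to sigma is +/- E.
        grad_sum = grad_sum + np.sign(pred - truth) * E
    return errors


if __name__ == "__main__":
    n = 3                                        # number of qubits, so d = 2**n
    d = 2 ** n
    rng = np.random.default_rng(1)
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = a @ a.conj().T                         # a random "unknown" density matrix
    rho /= np.trace(rho).real
    errs = online_state_learner(rho, T=300)
    print("mean error, first 50 rounds:", np.mean(errs[:50]))
    print("mean error, last 50 rounds: ", np.mean(errs[-50:]))
```

Under these toy assumptions, the average prediction error in later rounds is expected to drop relative to early rounds, loosely mirroring the abstract's bound of $O(n/\varepsilon^2)$ on the number of rounds with error at least $\varepsilon$.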
