

Avoiding Barren Plateaus with Classical Deep Neural Networks

26 May 2022
Lucas Friedrich, Jonas Maziero
arXiv:2205.13418
Abstract

Variational quantum algorithms (VQAs) are among the most promising algorithms of the Noisy Intermediate-Scale Quantum (NISQ) era. Such algorithms are constructed from a parameterization $U(\pmb{\theta})$ together with a classical optimizer that updates the parameters $\pmb{\theta}$ so as to minimize a cost function $C$. For this task, the gradient descent method, or one of its variants, is generally used: the circuit parameters are updated iteratively using the gradient of the cost function. However, several works in the literature have shown that this method suffers from a phenomenon known as barren plateaus (BPs). In this work, we propose a new method to mitigate BPs. In general, the parameters $\pmb{\theta}$ used in the parameterization $U$ are generated randomly; in our method, they are instead produced by a classical neural network (CNN). We show that this method is able to mitigate BPs not only at initialization but also during VQA training. In addition, we show how the method behaves for different CNN architectures.
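To make the setup concrete, below is a minimal sketch of the idea described in the abstract, assuming a PennyLane + PyTorch stack: a classical network generates the circuit parameters $\pmb{\theta}$, and gradient descent acts on the network's weights rather than on $\pmb{\theta}$ directly. The ansatz, cost function, network architecture, and fixed input vector are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: VQA parameters generated by a classical neural network
# (assumed setup; ansatz, cost, and network shape are illustrative).
import torch
import pennylane as qml

n_qubits, n_layers = 4, 3
n_params = n_layers * n_qubits  # one RY angle per qubit per layer (assumed ansatz)

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(theta):
    # Hardware-efficient-style ansatz: RY rotations plus a ring of CNOTs.
    theta = theta.reshape(n_layers, n_qubits)
    for l in range(n_layers):
        for q in range(n_qubits):
            qml.RY(theta[l, q], wires=q)
        for q in range(n_qubits):
            qml.CNOT(wires=[q, (q + 1) % n_qubits])
    # Cost C = <Z_0>, an illustrative choice of observable.
    return qml.expval(qml.PauliZ(0))

# Classical network that *generates* the circuit parameters theta.
# Its input is a fixed dummy vector; only the network weights are trained.
cnn = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.ELU(),
    torch.nn.Linear(32, n_params),
)
x = torch.ones(8)  # fixed input vector (assumption: any constant seed works)

opt = torch.optim.Adam(cnn.parameters(), lr=0.05)
for step in range(100):
    theta = cnn(x)         # parameters come from the CNN, not a random draw
    cost = circuit(theta)  # gradients flow back through the quantum circuit
    opt.zero_grad()
    cost.backward()
    opt.step()
```

The key design point is the reparameterization: because $\pmb{\theta}$ is the output of a trainable network rather than a free random vector, the optimizer explores the cost landscape through the network's weight space, which is the mechanism the paper leverages to avoid flat-gradient regions.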
