QUIET-SR: Quantum Image Enhancement Transformer for Single Image Super-Resolution

11 March 2025
Siddhant Dutta
Nouhaila Innan
Khadijeh Najafi
Sadok Ben Yahia
Muhammad Shafique
Abstract

Recent advancements in Single-Image Super-Resolution (SISR) using deep learning have significantly improved image restoration quality. However, the high computational cost of processing high-resolution images due to the large number of parameters in classical models, along with the scalability challenges of quantum algorithms for image processing, remains a major obstacle. In this paper, we propose the Quantum Image Enhancement Transformer for Super-Resolution (QUIET-SR), a hybrid framework that extends the Swin transformer architecture with a novel shifted quantum window attention mechanism, built upon variational quantum neural networks. QUIET-SR effectively captures complex residual mappings between low-resolution and high-resolution images, leveraging quantum attention mechanisms to enhance feature extraction and image restoration while requiring a minimal number of qubits, making it suitable for the Noisy Intermediate-Scale Quantum (NISQ) era. We evaluate our framework on MNIST (30.24 PSNR, 0.989 SSIM), FashionMNIST (29.76 PSNR, 0.976 SSIM), and the MedMNIST dataset collection, demonstrating that QUIET-SR achieves PSNR and SSIM scores comparable to state-of-the-art methods while using fewer parameters. These findings highlight the potential of scalable variational quantum machine learning models for SISR, marking a step toward practical quantum-enhanced image super-resolution.
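
The abstract describes a shifted quantum window attention mechanism built on variational quantum neural networks, but does not spell out the circuit or how it plugs into the Swin-style window attention. The sketch below (PennyLane + PyTorch) only illustrates the general pattern of a small variational quantum layer embedded inside a classical window module; the qubit count, ansatz choice, and the QuantumWindowHead wrapper are assumptions for illustration, not the authors' implementation.

# Minimal sketch (NOT the authors' code): a variational quantum circuit
# wrapped as a PyTorch layer, of the kind that could replace a classical
# projection inside one attention window.
import pennylane as qml
import torch
import torch.nn as nn

N_QUBITS = 4   # "minimal number of qubits" per the abstract (assumed value)
N_LAYERS = 2   # depth of the variational ansatz (assumption)

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    # Encode a small feature vector into single-qubit rotation angles.
    qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
    # Trainable entangling layers carry the variational parameters.
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    # Read out one Pauli-Z expectation value per qubit.
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

weight_shapes = {
    "weights": qml.StronglyEntanglingLayers.shape(n_layers=N_LAYERS,
                                                  n_wires=N_QUBITS)
}

class QuantumWindowHead(nn.Module):
    """Hypothetical quantum projection applied to the tokens of one window."""
    def __init__(self, dim):
        super().__init__()
        self.pre = nn.Linear(dim, N_QUBITS)   # compress features to qubit count
        self.qlayer = qml.qnn.TorchLayer(quantum_circuit, weight_shapes)
        self.post = nn.Linear(N_QUBITS, dim)  # map back to the embedding size

    def forward(self, x):                     # x: (tokens_in_window, dim)
        return self.post(self.qlayer(self.pre(x)))

# Illustrative usage: a 7x7 window of 32-dimensional tokens.
# head = QuantumWindowHead(dim=32)
# out = head(torch.randn(49, 32))            # -> shape (49, 32)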

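For context on the reported figures (e.g., 30.24 PSNR / 0.989 SSIM on MNIST): PSNR (in dB) and SSIM are the standard fidelity metrics for super-resolution. Below is a generic sketch of how such numbers are typically computed with scikit-image; it is not the authors' evaluation code, and the 8-bit data range and 28x28 image size are assumptions.

# Generic PSNR/SSIM evaluation sketch, assuming 8-bit grayscale NumPy arrays.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(sr: np.ndarray, hr: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) for one super-resolved / ground-truth pair."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, data_range=255)
    return psnr, ssim

# Example with random data standing in for a 28x28 MNIST-sized image.
rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)
sr = np.clip(hr.astype(int) + rng.integers(-5, 6, size=(28, 28)),
             0, 255).astype(np.uint8)
print(evaluate_pair(sr, hr))
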
@article{dutta2025_2503.08759,
  title={QUIET-SR: Quantum Image Enhancement Transformer for Single Image Super-Resolution},
  author={Siddhant Dutta and Nouhaila Innan and Khadijeh Najafi and Sadok Ben Yahia and Muhammad Shafique},
  journal={arXiv preprint arXiv:2503.08759},
  year={2025}
}