Modulate and Reconstruct: Learning Hyperspectral Imaging from Misaligned Smartphone Views

2 July 2025
Daniil Reutsky, Daniil Vladimirov, Yasin Mamedov, Georgy Perevozchikov, Nancy Mehta, Egor Ershov, Radu Timofte
arXiv:2507.01835 (abs · PDF · HTML)
Main: 12 pages · Bibliography: 5 pages · Appendix: 8 pages · 15 figures · 7 tables
Abstract

Hyperspectral reconstruction (HSR) from RGB images is a fundamentally ill-posed problem due to severe spectral information loss. Existing approaches typically rely on a single RGB image, limiting reconstruction accuracy. In this work, we propose a novel multi-image-to-hyperspectral reconstruction (MI-HSR) framework that leverages a triple-camera smartphone system, where two lenses are equipped with carefully selected spectral filters. Our configuration, grounded in theoretical and empirical analysis, enables richer and more diverse spectral observations than conventional single-camera setups. To support this new paradigm, we introduce Doomer, the first dataset for MI-HSR, comprising aligned images from three smartphone cameras and a hyperspectral reference camera across diverse scenes. We show that the proposed HSR model achieves consistent improvements over existing methods on the newly proposed benchmark. In a nutshell, our setup yields spectral estimates roughly 30% more accurate than those obtained from an ordinary RGB camera. Our findings suggest that multi-view spectral filtering with commodity hardware can unlock more accurate and practical hyperspectral imaging solutions.
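To make the acquisition model concrete, below is a minimal NumPy sketch of the multi-camera image formation that motivates MI-HSR: each lens integrates the per-pixel spectrum against its RGB sensitivities, and the two filtered lenses modulate the spectrum before integration, so three views jointly provide nine spectrally diverse measurements per scene point instead of three. All sensitivity and filter curves here are random placeholders, not the carefully selected filters from the paper, and the function names are illustrative.

import numpy as np

NUM_BANDS = 31              # e.g. 400-700 nm sampled every 10 nm
H, W = 64, 64               # spatial resolution of a toy scene

# Hyperspectral scene: one NUM_BANDS-dimensional spectrum per pixel.
hsi = np.random.rand(H, W, NUM_BANDS)

# Per-camera spectral response: 3 RGB sensitivities x NUM_BANDS.
# In practice these would come from camera calibration.
rgb_sensitivity = np.random.rand(3, NUM_BANDS)

# Hypothetical transmission curves of the two spectral filters placed
# in front of the second and third lenses (placeholders, not the
# paper's selected filters).
filter_a = np.random.rand(NUM_BANDS)
filter_b = np.random.rand(NUM_BANDS)

def capture(scene, sensitivity, transmission=None):
    """Project a hyperspectral scene to a 3-channel image.

    Each channel integrates the per-pixel spectrum against one camera
    sensitivity curve, optionally pre-modulated by a filter.
    """
    spectrum = scene if transmission is None else scene * transmission
    return np.einsum('hwb,cb->hwc', spectrum, sensitivity)

# Three smartphone views -> 9 spectrally diverse channels per pixel,
# versus 3 channels from a single unfiltered RGB camera.
views = [
    capture(hsi, rgb_sensitivity),            # plain RGB lens
    capture(hsi, rgb_sensitivity, filter_a),  # filtered lens 1
    capture(hsi, rgb_sensitivity, filter_b),  # filtered lens 2
]
observations = np.concatenate(views, axis=-1)
print(observations.shape)  # (64, 64, 9)

Note that this sketch assumes perfectly registered views for clarity; handling the misalignment between the three lenses is itself part of the problem the paper addresses, as the title indicates.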

@article{reutsky2025_2507.01835,
  title={Modulate and Reconstruct: Learning Hyperspectral Imaging from Misaligned Smartphone Views},
  author={Daniil Reutsky and Daniil Vladimirov and Yasin Mamedov and Georgy Perevozchikov and Nancy Mehta and Egor Ershov and Radu Timofte},
  journal={arXiv preprint arXiv:2507.01835},
  year={2025}
}