ResearchTrend.AI

EnergyFormer: Energy Attention with Fourier Embedding for Hyperspectral Image Classification

11 March 2025
Saad Sohail
Muhammad Usama
Usman Ghous
Manuel Mazzara
Salvatore Distefano
Muhammad Ahmad
Abstract

Hyperspectral imaging (HSI) provides rich spectral-spatial information across hundreds of contiguous bands, enabling precise material discrimination in applications such as environmental monitoring, agriculture, and urban analysis. However, the high dimensionality and spectral variability of HSI data pose significant challenges for feature extraction and classification. This paper presents EnergyFormer, a transformer-based framework designed to address these challenges through three key innovations: (1) Multi-Head Energy Attention (MHEA), which optimizes an energy function to selectively enhance critical spectral-spatial features, improving feature discrimination; (2) Fourier Position Embedding (FoPE), which adaptively encodes spectral and spatial dependencies to reinforce long-range interactions; and (3) Enhanced Convolutional Block Attention Module (ECBAM), which selectively amplifies informative wavelength bands and spatial structures, enhancing representation learning. Extensive experiments on the WHU-Hi-HanChuan, Salinas, and Pavia University datasets demonstrate that EnergyFormer achieves overall accuracies of 99.28%, 98.63%, and 98.72%, respectively, outperforming state-of-the-art CNN, transformer, and Mamba-based models. The source code will be made available at this https URL.
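The abstract does not give FoPE's exact formulation, but for intuition, a Fourier-style position embedding in its classic sinusoidal form (the standard transformer variant, not necessarily the paper's adaptive one; `base`, `dim`, and the patch size below are illustrative assumptions) can be sketched as:

```python
import numpy as np

def fourier_position_embedding(num_positions: int, dim: int, base: float = 10000.0) -> np.ndarray:
    """Sinusoidal Fourier features over token positions.

    Generic sketch only: EnergyFormer's FoPE adaptively encodes
    spectral-spatial dependencies, whose details are not in the abstract.
    """
    positions = np.arange(num_positions)[:, None]            # (P, 1)
    freqs = 1.0 / (base ** (np.arange(0, dim, 2) / dim))     # (dim/2,) geometric frequency ladder
    angles = positions * freqs[None, :]                      # (P, dim/2)
    emb = np.zeros((num_positions, dim))
    emb[:, 0::2] = np.sin(angles)                            # even channels: sine
    emb[:, 1::2] = np.cos(angles)                            # odd channels: cosine
    return emb

# Example: a hypothetical 9x9 HSI patch flattened to 81 tokens, 64-dim embedding.
pe = fourier_position_embedding(81, 64)
print(pe.shape)  # (81, 64)
```

Because each dimension pair oscillates at a different frequency, relative offsets between tokens become linear functions of the embedding, which is what lets attention capture the long-range interactions the abstract refers to.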

View on arXiv
@article{sohail2025_2503.08239,
  title={EnergyFormer: Energy Attention with Fourier Embedding for Hyperspectral Image Classification},
  author={Saad Sohail and Muhammad Usama and Usman Ghous and Manuel Mazzara and Salvatore Distefano and Muhammad Ahmad},
  journal={arXiv preprint arXiv:2503.08239},
  year={2025}
}