Dynamic Memory-enhanced Transformer for Hyperspectral Image Classification

17 April 2025
Muhammad Ahmad
Manuel Mazzara
Salvatore Distefano
Adil Mehmood Khan
Abstract

Hyperspectral image (HSI) classification remains a challenging task due to the intricate spatial-spectral correlations. Existing transformer models excel in capturing long-range dependencies but often suffer from information redundancy and attention inefficiencies, limiting their ability to model fine-grained relationships crucial for HSI classification. To overcome these limitations, this work proposes MemFormer, a lightweight and memory-enhanced transformer. MemFormer introduces a memory-enhanced multi-head attention mechanism that iteratively refines a dynamic memory module, enhancing feature extraction while reducing redundancy across layers. Additionally, a dynamic memory enrichment strategy progressively captures complex spatial and spectral dependencies, leading to more expressive feature representations. To further improve structural consistency, we incorporate a spatial-spectral positional encoding (SSPE) tailored for HSI data, ensuring continuity without the computational burden of convolution-based approaches. Extensive experiments on benchmark datasets demonstrate that MemFormer achieves superior classification accuracy, outperforming state-of-the-art methods.
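
The abstract gives enough detail to sketch the core mechanism it describes: a small learnable memory that is concatenated to the attention keys and values and iteratively refined as tokens flow through the layers. Below is a minimal PyTorch sketch of such a memory-enhanced attention block. The class name, the gating-based memory update, and all dimensions are assumptions made for illustration from the abstract alone; they are not the authors' MemFormer implementation, and the spatial-spectral positional encoding (SSPE) is omitted.

```python
# Hypothetical sketch of a memory-enhanced multi-head attention block,
# inferred from the abstract; not the authors' code.
import torch
import torch.nn as nn


class MemoryEnhancedAttention(nn.Module):
    """Multi-head attention whose keys/values are augmented with a small
    learnable memory that is refined and handed to the next layer."""

    def __init__(self, dim: int, num_heads: int = 4, num_memory_tokens: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Dynamic memory: a few learnable tokens shared across the batch.
        self.memory = nn.Parameter(torch.randn(1, num_memory_tokens, dim) * 0.02)
        self.norm = nn.LayerNorm(dim)
        # Gate controlling how strongly pooled token statistics update the memory.
        self.update_gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, mem=None):
        # x: (batch, tokens, dim) spectral-spatial tokens from an HSI patch.
        # mem: memory carried over from the previous layer (None at layer 0).
        if mem is None:
            mem = self.memory.expand(x.size(0), -1, -1)
        # Attend over the input tokens concatenated with the memory tokens.
        kv = torch.cat([x, mem], dim=1)
        out, _ = self.attn(query=x, key=kv, value=kv)
        x = self.norm(x + out)
        # "Memory enrichment": blend a pooled summary of the refined tokens
        # back into the memory so deeper layers see progressively richer context.
        summary = x.mean(dim=1, keepdim=True)            # (batch, 1, dim)
        gate = torch.sigmoid(self.update_gate(summary))  # (batch, 1, dim)
        refined_mem = (1 - gate) * mem + gate * summary  # broadcast over memory tokens
        return x, refined_mem


if __name__ == "__main__":
    blocks = nn.ModuleList([MemoryEnhancedAttention(dim=96) for _ in range(3)])
    x, mem = torch.randn(2, 64, 96), None   # 2 patches, 64 tokens, 96-dim embeddings
    for blk in blocks:
        x, mem = blk(x, mem)                # memory is refined across layers
    print(x.shape, mem.shape)               # (2, 64, 96) and (2, 8, 96)
```

Keeping the memory to a handful of tokens keeps the extra attention cost negligible while still giving every layer a compact, progressively enriched summary of earlier layers, which is one plausible reading of how the paper reduces redundancy across layers.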

@article{ahmad2025_2504.13242,
  title={Dynamic Memory-enhanced Transformer for Hyperspectral Image Classification},
  author={Muhammad Ahmad and Manuel Mazzara and Salvatore Distefano and Adil Mehmood Khan},
  journal={arXiv preprint arXiv:2504.13242},
  year={2025}
}