Dynamic Memory Transformer for Hyperspectral Image Classification
Hyperspectral image (HSI) classification (HSIC) requires effective modeling of complex spatial-spectral dependencies under limited labeled data and high dimensionality. While transformer-based models have shown strong capability in capturing long-range contextual information, they often introduce redundant attention patterns, which limits their effectiveness for fine-grained HSI analysis. To address these challenges, this paper proposes MemFormer, a lightweight transformer architecture for HSIC that incorporates a dynamic memory-enhanced attention mechanism. The proposed design augments multi-head self-attention with a compact global memory module that progressively aggregates contextual information across layers, enabling efficient modeling of long-range dependencies while reducing attention redundancy. In addition, a Spatial-Spectral Positional Embedding (SSPE) jointly encodes spatial continuity and spectral ordering, providing structurally consistent representations without relying on convolution-based positional encodings. Extensive experiments on three benchmark hyperspectral datasets, Indian Pines, WHU-Hi-HanChuan, and WHU-Hi-HongHu, demonstrate that MemFormer achieves superior classification performance compared to representative convolutional, hybrid, and transformer-based methods. On the Indian Pines dataset, MemFormer attains an overall accuracy of up to 99.55\%, an average accuracy of 99.38\%, and a Kappa coefficient of 99.49\%, highlighting its effectiveness and efficiency for HSIC.
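The abstract's core idea, augmenting self-attention with a small set of global memory slots that accumulate context across layers, can be illustrated with a minimal single-head sketch. This is not the paper's actual MemFormer implementation; the function name, weight matrices, and memory-update rule below are illustrative assumptions, written in NumPy to show how patch tokens can attend over both local tokens and a compact memory, and how that memory can be refreshed layer by layer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(x, memory, wq, wk, wv):
    """Single-head attention with a compact global memory (illustrative sketch).

    x:      (n, d) patch tokens from the hyperspectral cube
    memory: (m, d) global memory slots, m << n
    Keys/values are formed over tokens AND memory, so every token can read
    accumulated global context; the memory is then updated by cross-attending
    to the new token representations, letting context aggregate across layers.
    """
    kv = np.concatenate([x, memory], axis=0)           # (n+m, d) context pool
    q, k, v = x @ wq, kv @ wk, kv @ wv
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))               # (n, n+m) attention map
    out = attn @ v                                     # updated patch tokens
    # Memory update (assumed rule): slots attend to the refreshed tokens.
    mq = memory @ wq
    m_attn = softmax(mq @ (out @ wk).T / np.sqrt(d))   # (m, n)
    new_memory = m_attn @ (out @ wv)                   # (m, d) carried forward
    return out, new_memory

# Stacking this across layers passes `new_memory` into the next layer,
# which is the sense in which context is "progressively aggregated".
rng = np.random.default_rng(0)
n, m, d = 16, 4, 8
x = rng.normal(size=(n, d))
mem = rng.normal(size=(m, d))
wq, wk, wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
out, new_mem = memory_attention(x, mem, wq, wk, wv)
```

Because the memory holds only `m` slots regardless of how many patches `n` are extracted, the extra cost over plain self-attention stays small, which is consistent with the abstract's lightweight, redundancy-reducing framing.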