Exploring Self-Attention for Crop-type Classification Explainability

Transformer models have become a promising approach for crop-type classification. Although their attention weights can be used to identify the time points relevant for crop disambiguation, the validity of these insights depends on how closely the attention weights approximate the actual workings of these black-box models, which is not always clear. In this paper, we introduce a novel explainability framework that systematically evaluates the explanatory power of the attention weights of a standard transformer encoder for crop-type classification. Our results show that attention patterns strongly relate to key dates, which are often associated with critical phenological events for crop-type classification. Further, a sensitivity analysis reveals the limited capability of the attention weights to characterize crop phenology, as the identified phenological events depend on the other crops considered during training. This limitation highlights the relevance of future work towards deep learning approaches capable of automatically learning the temporal vegetation dynamics needed for accurate crop disambiguation.
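To make the idea concrete, the sketch below shows how attention weights over a satellite time series can be turned into per-date importance scores. This is a minimal NumPy illustration of scaled dot-product attention, not the authors' implementation: the toy embeddings, projection matrices, and the choice to average the attention a date receives are all assumptions for demonstration.

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)      # each row sums to 1

# Toy time series: T observation dates, each with a d-dim embedding
# (in practice these would come from Sentinel-2 spectral features).
rng = np.random.default_rng(0)
T, d = 12, 8
X = rng.normal(size=(T, d))

# Hypothetical learned query/key projections of one attention head.
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
A = attention_weights(X @ Wq, X @ Wk)  # (T, T) attention matrix

# One plausible aggregation: a date's importance is the average
# attention it receives from all other dates; the top-scoring dates
# are candidate "key dates" for crop disambiguation.
importance = A.mean(axis=0)
key_dates = np.argsort(importance)[::-1][:3]
```

Whether such key dates actually track phenological events is exactly what the sensitivity analysis in the paper probes.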
@article{obadic2025_2210.13167,
  title   = {Exploring Self-Attention for Crop-type Classification Explainability},
  author  = {Ivica Obadic and Ribana Roscher and Dario Augusto Borges Oliveira and Xiao Xiang Zhu},
  journal = {arXiv preprint arXiv:2210.13167},
  year    = {2025}
}