LOGCAN++: Adaptive Local-global class-aware network for semantic segmentation of remote sensing imagery

24 June 2024
Xiaowen Ma, Rongrong Lian, Zhenkai Wu, Hongbo Guo, Mengting Ma, Sensen Wu, Zhenhong Du, Siyang Song, Wei Zhang
Abstract

Remote sensing images are usually characterized by complex backgrounds, scale and orientation variations, and large intra-class variance. General semantic segmentation methods usually fail to fully account for these issues, and thus their performance on remote sensing image segmentation is limited. In this paper, we propose LOGCAN++, a semantic segmentation model customized for remote sensing images, which consists of a Global Class Awareness (GCA) module and several Local Class Awareness (LCA) modules. The GCA module captures global representations for class-level context modeling to reduce the interference of background noise. The LCA module generates local class representations as intermediate perceptual elements that indirectly associate pixels with the global class representations, in order to address the large intra-class variance problem. In particular, we introduce affine transformations in the LCA module for adaptive extraction of local class representations, which allows the model to tolerate scale and orientation variations in remotely sensed images. Extensive experiments on three benchmark datasets show that LOGCAN++ outperforms current mainstream general and remote sensing semantic segmentation methods and achieves a better trade-off between speed and accuracy. Code is available at this https URL.
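To make the local-global class-aware idea concrete, here is a minimal PyTorch sketch of the pattern the abstract describes: a GCA step pools global class centers from a coarse prediction, and an LCA step pools per-window class centers that act as intermediaries between pixels and the global centers. This is a hypothetical reading, not the authors' implementation: the class and variable names (ClassAwareContext, window, etc.) are my own, and the paper's affine transformations for adaptive local-region extraction are omitted here in favor of fixed square windows.

```python
import torch
import torch.nn as nn


class ClassAwareContext(nn.Module):
    """Sketch of GCA + LCA-style class-aware context (assumed design,
    not the released LOGCAN++ code). Fixed windows stand in for the
    paper's affine-transformed local regions."""

    def __init__(self, channels, num_classes, window=8):
        super().__init__()
        self.window = window
        self.coarse = nn.Conv2d(channels, num_classes, 1)  # coarse class logits
        self.proj = nn.Conv2d(channels, channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    @staticmethod
    def class_centers(feats, logits):
        # feats: (B, C, N), logits: (B, K, N) -> centers: (B, K, C)
        attn = logits.softmax(dim=-1)  # per-class weights over pixels
        return torch.einsum('bkn,bcn->bkc', attn, feats)

    def forward(self, x):
        b, c, h, w = x.shape
        s = self.window
        assert h % s == 0 and w % s == 0, "H and W must be divisible by window"
        logits = self.coarse(x)  # (B, K, H, W)

        # GCA: global class representations pooled over the whole image
        g_centers = self.class_centers(x.flatten(2), logits.flatten(2))

        # LCA: per-window local class representations
        def windows(t):  # (B, D, H, W) -> (B * nWin, D, s * s)
            d = t.shape[1]
            t = t.reshape(b, d, h // s, s, w // s, s)
            return t.permute(0, 2, 4, 1, 3, 5).reshape(-1, d, s * s)

        xw, lw = windows(x), windows(logits)
        l_centers = self.class_centers(xw, lw)  # (B * nWin, K, C)

        # Associate local centers with the global centers (cross-attention)
        n_win = (h // s) * (w // s)
        gc = g_centers.repeat_interleave(n_win, dim=0)
        sim = torch.einsum('nkc,nqc->nkq', l_centers, gc) / c ** 0.5
        l_centers = torch.einsum('nkq,nqc->nkc', sim.softmax(-1), gc)

        # Pixels attend to the globally refined local centers
        qw = windows(self.proj(x)).transpose(1, 2)  # (B * nWin, s*s, C)
        sim = torch.einsum('npc,nkc->npk', qw, l_centers) / c ** 0.5
        ctx = torch.einsum('npk,nkc->npc', sim.softmax(-1), l_centers)

        # Fold windows back to (B, C, H, W) and fuse with the input
        ctx = ctx.transpose(1, 2).reshape(b, h // s, w // s, c, s, s)
        ctx = ctx.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)
        return self.fuse(torch.cat([x, ctx], dim=1)), logits
```

The indirect pixel-to-global association is the key step: pixels never attend to the global centers directly; the per-window centers absorb global class information first, which is one plausible way to keep class context robust to large intra-class variance within a scene.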

@article{ma2025_2406.16502,
  title={LOGCAN++: Adaptive Local-global class-aware network for semantic segmentation of remote sensing imagery},
  author={Xiaowen Ma and Rongrong Lian and Zhenkai Wu and Hongbo Guo and Mengting Ma and Sensen Wu and Zhenhong Du and Siyang Song and Wei Zhang},
  journal={arXiv preprint arXiv:2406.16502},
  year={2025}
}