SCSegamba: Lightweight Structure-Aware Vision Mamba for Crack Segmentation in Structures

3 March 2025
Hui Liu
Chen Jia
Fan Shi
Xu Cheng
Shengyong Chen
Abstract

Pixel-level segmentation of structural cracks across varied scenarios remains a considerable challenge. Current methods struggle to model crack morphology and texture effectively and to balance segmentation quality against low computational resource usage. To overcome these limitations, we propose a lightweight Structure-Aware Vision Mamba Network (SCSegamba), capable of generating high-quality pixel-level segmentation maps by leveraging both the morphological information and the texture cues of crack pixels at minimal computational cost. Specifically, we develop a Structure-Aware Visual State Space module (SAVSS), which incorporates a lightweight Gated Bottleneck Convolution (GBC) and a Structure-Aware Scanning Strategy (SASS). The key insight of GBC lies in its effectiveness in modeling the morphological information of cracks, while SASS enhances the perception of crack topology and texture by strengthening the continuity of semantic information between crack pixels. Experiments on crack benchmark datasets demonstrate that our method outperforms other state-of-the-art (SOTA) methods, achieving the highest performance with only 2.8M parameters. On the multi-scenario dataset, our method reaches an F1 score of 0.8390 and an mIoU of 0.8479. The code is available at this https URL.
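
To make the gating idea mentioned in the abstract more concrete, the sketch below shows one plausible way a gated bottleneck convolution could be wired up in PyTorch. The abstract does not describe the authors' GBC design, so the class name GatedBottleneckConv, the 1x1-reduce / depthwise-3x3 / 1x1-expand bottleneck, the reduction ratio, and the sigmoid gate are illustrative assumptions rather than the paper's implementation.

# Minimal sketch, not the authors' code: assumes a bottleneck branch whose
# output is modulated by a per-pixel, per-channel sigmoid gate and added back
# to the input as a residual.
import torch
import torch.nn as nn

class GatedBottleneckConv(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # Bottleneck branch: cheap channel reduction, depthwise spatial mixing, expansion.
        self.bottleneck = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden, bias=False),  # depthwise 3x3
            nn.BatchNorm2d(hidden),
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),
        )
        # Gate branch: decides, per location and channel, how much of the
        # bottleneck response to pass through.
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.gate(x) * self.bottleneck(x)  # gated residual update

if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)          # dummy feature map
    print(GatedBottleneckConv(64)(feats).shape)   # torch.Size([1, 64, 128, 128])

The depthwise 3x3 plus pointwise convolutions keep the parameter count small, which is in the spirit of the 2.8M-parameter budget reported above; the actual SAVSS/SASS components would sit around such a block in the full network.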

@article{liu2025_2503.01113,
  title={SCSegamba: Lightweight Structure-Aware Vision Mamba for Crack Segmentation in Structures},
  author={Hui Liu and Chen Jia and Fan Shi and Xu Cheng and Shengyong Chen},
  journal={arXiv preprint arXiv:2503.01113},
  year={2025}
}