Structural and Statistical Texture Knowledge Distillation for Semantic Segmentation

6 May 2023
Deyi Ji
Haoran Wang
Mingyuan Tao
Jianqiang Huang
Xiansheng Hua
Hongtao Lu
Abstract

Existing knowledge distillation works for semantic segmentation mainly focus on transferring high-level contextual knowledge from teacher to student. However, low-level texture knowledge is also of vital importance for characterizing local structural patterns and global statistical properties, such as boundary, smoothness, regularity and color contrast, which may not be well captured by high-level deep features. In this paper, we intend to take full advantage of both structural and statistical texture knowledge and propose a novel Structural and Statistical Texture Knowledge Distillation (SSTKD) framework for semantic segmentation. Specifically, for structural texture knowledge, we introduce a Contourlet Decomposition Module (CDM) that decomposes low-level features with an iterative Laplacian pyramid and a directional filter bank to mine structural texture knowledge. For statistical knowledge, we propose a Denoised Texture Intensity Equalization Module (DTIEM) that adaptively extracts and enhances statistical texture knowledge through heuristic iterative quantization and a denoising operation. Finally, each type of knowledge learning is supervised by an individual loss function, forcing the student network to mimic the teacher better from a broader perspective. Experiments show that the proposed method achieves state-of-the-art performance on the Cityscapes, Pascal VOC 2012 and ADE20K datasets.
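
The CDM described above combines an iterative Laplacian pyramid with a directional filter bank. As a rough illustration of how the pyramid portion of such a structural texture distillation term could be wired up, below is a minimal PyTorch sketch; it is not the authors' implementation, the function names, kernel size, pyramid depth and the simple per-level L2 matching are assumptions, and the directional filter bank and DTIEM are omitted entirely.

import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_kernel(channels: int, sigma: float = 1.0) -> torch.Tensor:
    # 5x5 Gaussian kernel replicated per channel for a depthwise blur
    coords = torch.arange(5, dtype=torch.float32) - 2.0
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    k2d = torch.outer(g, g)
    return k2d.expand(channels, 1, 5, 5).clone()


def laplacian_pyramid(x: torch.Tensor, levels: int = 3):
    # Iterative Laplacian pyramid: each level keeps the band-pass residual
    # between a feature map and its blurred, down/up-sampled version.
    kernel = gaussian_kernel(x.shape[1]).to(x.device, x.dtype)
    bands = []
    current = x
    for _ in range(levels):
        blurred = F.conv2d(current, kernel, padding=2, groups=current.shape[1])
        down = F.avg_pool2d(blurred, 2)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        bands.append(current - up)   # band-pass (texture/edge) residual
        current = down
    bands.append(current)            # coarsest low-pass residual
    return bands


def texture_distillation_loss(student_feat, teacher_feat, levels: int = 3):
    # L2 mismatch between teacher and student pyramid bands, one term per level;
    # assumes the two feature maps share channel count and spatial size.
    s_bands = laplacian_pyramid(student_feat, levels)
    t_bands = laplacian_pyramid(teacher_feat, levels)
    return sum(F.mse_loss(s, t.detach()) for s, t in zip(s_bands, t_bands))


if __name__ == "__main__":
    student = torch.randn(2, 64, 128, 128)   # hypothetical low-level features
    teacher = torch.randn(2, 64, 128, 128)
    print(texture_distillation_loss(student, teacher).item())

In such a setup the texture term would be added to the usual segmentation and distillation losses so that the student is supervised on low-level band-pass residuals as well as high-level predictions.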

@article{ji2025_2305.03944,
  title={Structural and Statistical Texture Knowledge Distillation for Semantic Segmentation},
  author={Deyi Ji and Haoran Wang and Mingyuan Tao and Jianqiang Huang and Xian-Sheng Hua and Hongtao Lu},
  journal={arXiv preprint arXiv:2305.03944},
  year={2025}
}