Reflection Invariance Learning for Few-shot Semantic Segmentation

1 June 2023
Qinglong Cao
Yuntian Chen
Chao Ma
Xiaokang Yang
Abstract

Few-shot semantic segmentation (FSS) aims to segment objects of unseen classes in query images with only a few annotated support images. Existing FSS algorithms typically focus on mining category representations from a single-view support set to match semantic objects in a single-view query. However, the limited number of annotated samples makes it difficult for single-view matching to perceive the reflection invariance of novel objects, which restricts the learning space for novel categories and further induces biased segmentation with degraded parsing performance. To address this challenge, this paper proposes a novel few-shot segmentation framework that mines reflection invariance in a multi-view matching manner. Specifically, original and reflected support features from different perspectives but with the same semantics are fused in a learnable manner to obtain a reflection-invariant prototype with stronger category representation ability. Simultaneously, to provide better prior guidance, the Reflection Invariance Prior Mask Generation (RIPMG) module is proposed to integrate prior knowledge from different perspectives. Finally, segmentation predictions from different views are complementarily merged in the Reflection Invariance Semantic Prediction (RISP) module to yield precise segmentation results. Extensive experiments on both PASCAL-$5^\textit{i}$ and COCO-$20^\textit{i}$ datasets demonstrate the effectiveness of our approach and show that our method achieves state-of-the-art performance. Code is available at \url{https://anonymous.4open.science/r/RILFS-A4D1}.
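To make the multi-view prototype idea concrete, the following PyTorch sketch shows one plausible way a reflection-invariant prototype could be formed: support features are extracted from the original view and a horizontally flipped view, pooled over the support mask, and combined with a learnable weight. The backbone interface, the `masked_average_pooling` helper, and the scalar fusion weight `alpha` are illustrative assumptions, not the paper's actual fusion, RIPMG, or RISP implementation; see the released code for the real method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def masked_average_pooling(feat, mask):
    """Pool support features over the foreground mask to obtain a class prototype.

    feat: (B, C, H, W) support feature map
    mask: (B, 1, H, W) binary foreground mask (resized to the feature resolution)
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear", align_corners=False)
    proto = (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # (B, C)
    return proto


class ReflectionInvariantPrototype(nn.Module):
    """Fuse prototypes from the original and horizontally reflected support views.

    The scalar fusion weight `alpha` is a hypothetical stand-in for the paper's
    learnable fusion, whose exact form is not specified in the abstract.
    """

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # assumed learnable fusion weight

    def forward(self, backbone, support_img, support_mask):
        # Features from the original support view.
        feat = backbone(support_img)
        # Features from the horizontally flipped view, flipped back so the mask aligns.
        feat_ref = torch.flip(backbone(torch.flip(support_img, dims=[-1])), dims=[-1])

        proto = masked_average_pooling(feat, support_mask)
        proto_ref = masked_average_pooling(feat_ref, support_mask)

        # Learnable convex combination of the two view-specific prototypes.
        w = torch.sigmoid(self.alpha)
        return w * proto + (1.0 - w) * proto_ref
```

In a full pipeline, the resulting prototype would then be matched against query features (e.g., via cosine similarity) to produce the segmentation prediction; the design choice of flipping the reflected feature map back before pooling keeps the support mask valid for both views.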
