ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

MVREC: A General Few-shot Defect Classification Model Using Multi-View Region-Context

22 December 2024
Shuai Lyu
Fangjian Liao
Zeqi Ma
Rongchen Zhang
Dongmei Mo
Waikeung Wong
Abstract

Few-shot defect multi-classification (FSDMC) is an emerging trend in quality control within industrial manufacturing. However, current FSDMC research often lacks generalizability due to its focus on specific datasets. Additionally, defect classification heavily relies on contextual information within images, and existing methods fall short of effectively extracting this information. To address these challenges, we propose a general FSDMC framework called MVREC, which offers two primary advantages: (1) MVREC extracts general features for defect instances by incorporating the pre-trained AlphaCLIP model. (2) It utilizes a region-context framework to enhance defect features by leveraging mask region input and multi-view context augmentation. Furthermore, Few-shot Zip-Adapter(-F) classifiers within the model are introduced to cache the visual features of the support set and perform few-shot classification. We also introduce MVTec-FS, a new FSDMC benchmark based on MVTec AD, which includes 1228 defect images with instance-level mask annotations and 46 defect types. Extensive experiments conducted on MVTec-FS and four additional datasets demonstrate MVREC's effectiveness in general defect classification and its ability to incorporate contextual information to improve classification performance. Code: this https URL
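The Zip-Adapter idea described above — caching visual features of the support set and classifying queries by similarity to that cache — follows the general pattern of training-free cache-based adapters such as Tip-Adapter. The sketch below is an illustrative approximation only, not the paper's actual Zip-Adapter(-F): the function names, the exponential affinity sharpening, and the `beta` parameter are assumptions, and the real model additionally uses AlphaCLIP mask-region features and multi-view context augmentation.

```python
import numpy as np

def build_cache(support_feats, support_labels, num_classes):
    """Cache L2-normalized support features as keys and one-hot
    labels as values (Tip-Adapter-style cache; illustrative only)."""
    keys = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    values = np.eye(num_classes)[support_labels]
    return keys, values

def classify(query_feat, keys, values, beta=5.0):
    """Score a query against the cache and return the predicted class.

    `beta` (assumed hyperparameter) sharpens the cosine affinities so
    that the nearest cached support features dominate the vote.
    """
    q = query_feat / np.linalg.norm(query_feat)
    affinity = keys @ q                          # cosine similarity to each cached key
    weights = np.exp(-beta * (1.0 - affinity))   # sharpened, always in (0, 1]
    logits = weights @ values                    # aggregate per-class evidence
    return int(np.argmax(logits))
```

In this training-free form the cache keys are frozen; the "-F" (fine-tuned) variant of such adapters typically treats the cached keys as learnable parameters and fine-tunes them on the support set.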

@article{lyu2025_2412.16897,
  title={MVREC: A General Few-shot Defect Classification Model Using Multi-View Region-Context},
  author={Shuai Lyu and Rongchen Zhang and Zeqi Ma and Fangjian Liao and Dongmei Mo and Waikeung Wong},
  journal={arXiv preprint arXiv:2412.16897},
  year={2025}
}