
Mentor3AD: Feature Reconstruction-based 3D Anomaly Detection via Multi-modality Mentor Learning

Main: 8 pages
7 figures
Bibliography: 2 pages
Abstract

Multimodal feature reconstruction is a promising approach to 3D anomaly detection, as it leverages the complementary information of the two modalities. We advance this paradigm with multi-modal mentor learning, which fuses intermediate features so that normal and anomalous regions can be better distinguished through feature differences. To this end, we propose Mentor3AD, a method that exploits the features shared across modalities to extract more effective representations and guide feature reconstruction, ultimately improving detection performance. Specifically, Mentor3AD includes a Mentor of Fusion Module (MFM) that merges features extracted from the RGB and 3D modalities into a mentor feature. We further design a Mentor of Guidance Module (MGM) that facilitates cross-modal reconstruction under the guidance of the mentor feature. Lastly, we introduce a Voting Module (VM) to generate the final anomaly score more accurately. Extensive comparison and ablation studies on MVTec 3D-AD and Eyecandies verify the effectiveness of the proposed method.
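To make the three-module pipeline concrete, the following is a minimal PyTorch-style sketch of the data flow the abstract describes: MFM fusing RGB and 3D features into a mentor feature, MGM performing mentor-guided cross-modal reconstruction, and VM combining per-modality discrepancies into an anomaly score. All internals here are assumptions (concatenation-plus-linear fusion, cosine-distance scoring, averaging as the vote, patch-level features of size 256); the actual module designs are defined in the paper.

# Minimal sketch of the Mentor3AD pipeline described in the abstract.
# Layer choices, fusion by concatenation, cosine-distance scoring, and
# averaging as the "vote" are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MentorOfFusion(nn.Module):
    """MFM: merge RGB and 3D (point-cloud) features into a mentor feature."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, f_rgb, f_3d):
        return self.proj(torch.cat([f_rgb, f_3d], dim=-1))


class MentorOfGuidance(nn.Module):
    """MGM: cross-modal reconstruction guided by the mentor feature."""
    def __init__(self, dim):
        super().__init__()
        self.rgb_decoder = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.pc_decoder = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, f_rgb, f_3d, mentor):
        # Reconstruct each modality from the other modality plus the mentor feature.
        rec_rgb = self.rgb_decoder(torch.cat([f_3d, mentor], dim=-1))
        rec_3d = self.pc_decoder(torch.cat([f_rgb, mentor], dim=-1))
        return rec_rgb, rec_3d


def voting_module(f_rgb, rec_rgb, f_3d, rec_3d):
    """VM: combine per-modality reconstruction discrepancies into one anomaly score."""
    err_rgb = 1.0 - F.cosine_similarity(f_rgb, rec_rgb, dim=-1)
    err_3d = 1.0 - F.cosine_similarity(f_3d, rec_3d, dim=-1)
    return 0.5 * (err_rgb + err_3d)  # assumed simple average vote


if __name__ == "__main__":
    dim, n_patches = 256, 784
    f_rgb = torch.randn(1, n_patches, dim)   # RGB patch features (assumed pre-extracted)
    f_3d = torch.randn(1, n_patches, dim)    # 3D patch features (assumed pre-extracted)
    mfm, mgm = MentorOfFusion(dim), MentorOfGuidance(dim)
    mentor = mfm(f_rgb, f_3d)
    rec_rgb, rec_3d = mgm(f_rgb, f_3d, mentor)
    score_map = voting_module(f_rgb, rec_rgb, f_3d, rec_3d)
    print(score_map.shape)  # per-patch anomaly scores, e.g. torch.Size([1, 784])

At test time, large reconstruction discrepancies at a patch indicate a likely anomaly; how the per-patch scores are pooled into an image-level score is likewise an assumption here.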

@article{wang2025_2505.21420,
  title={Mentor3AD: Feature Reconstruction-based 3D Anomaly Detection via Multi-modality Mentor Learning},
  author={Jinbao Wang and Hanzhe Liang and Can Gao and Chenxi Hu and Jie Zhou and Yunkang Cao and Linlin Shen and Weiming Shen},
  journal={arXiv preprint arXiv:2505.21420},
  year={2025}
}