One-for-More: Continual Diffusion Model for Anomaly Detection

Abstract

With the rise of generative models, there is growing interest in unifying all tasks within a generative framework. Anomaly detection methods also fall within this scope and use diffusion models to generate or reconstruct normal samples from arbitrary anomaly images. However, our study finds that the diffusion model suffers from severe ``faithfulness hallucination'' and ``catastrophic forgetting'', so it cannot cope with unpredictable pattern increments. To mitigate these problems, we propose a continual diffusion model that uses gradient projection to achieve stable continual learning. Gradient projection regularizes model updating by modifying the gradient toward a direction that protects the learned knowledge. As a double-edged sword, however, it incurs a huge memory cost due to the Markov process. We therefore propose an iterative singular value decomposition method based on the transitive property of linear representation, which consumes tiny memory and incurs almost no performance loss. Finally, considering the risk that the diffusion model ``over-fits'' to normal images, we propose an anomaly-masked network to enhance the conditioning mechanism of the diffusion model. For continual anomaly detection, our method achieves first place in 17/18 settings on MVTec and VisA. Code is available at this https URL
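To make the gradient-projection idea concrete, the following is a minimal PyTorch sketch of a generic GPM-style projection (in the spirit of Saha et al.), not the paper's exact algorithm: a basis of past-task feature directions is extracted by SVD, and the weight gradient's component inside that protected subspace is removed before the optimizer step. All names (build_feature_basis, project_gradient, the example linear layer, the stored activations) are hypothetical placeholders introduced here for illustration.

import torch

def build_feature_basis(activations: torch.Tensor, energy: float = 0.99) -> torch.Tensor:
    # SVD of past-task activations (d x n); keep the leading singular vectors
    # covering `energy` of the spectrum as the protected subspace basis (d x k).
    U, S, _ = torch.linalg.svd(activations, full_matrices=False)
    cum = torch.cumsum(S ** 2, dim=0) / torch.sum(S ** 2)
    k = int(torch.searchsorted(cum, torch.tensor(energy)).item()) + 1
    return U[:, :k]

def project_gradient(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    # grad: (out_dim, in_dim) weight gradient; basis: (in_dim, k).
    # Removing the component inside the protected subspace keeps the update
    # (approximately) from disturbing knowledge learned on earlier patterns.
    return grad - grad @ basis @ basis.t()

# Usage inside one training step (hypothetical layer and stored activations):
linear = torch.nn.Linear(64, 32)
old_acts = torch.randn(64, 512)            # stand-in for stored past-task features
basis = build_feature_basis(old_acts)
loss = linear(torch.randn(8, 64)).pow(2).mean()
loss.backward()
with torch.no_grad():
    linear.weight.grad = project_gradient(linear.weight.grad, basis)

In this sketch the per-layer activation matrices are what drive the memory cost; the paper's iterative SVD is aimed at avoiding storing such matrices across every diffusion timestep of the Markov chain.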

@article{li2025_2502.19848,
  title={One-for-More: Continual Diffusion Model for Anomaly Detection},
  author={Xiaofan Li and Xin Tan and Zhuo Chen and Zhizhong Zhang and Ruixin Zhang and Rizen Guo and Guanna Jiang and Yulong Chen and Yanyun Qu and Lizhuang Ma and Yuan Xie},
  journal={arXiv preprint arXiv:2502.19848},
  year={2025}
}