
D³-Predictor: Noise-Free Deterministic Diffusion for Dense Prediction

Changliang Xia, Chengyou Jia, Minnan Luo, Zhuohang Dang, Xin Shen, Bowen Ping
Main: 8 pages · Appendix: 9 pages · Bibliography: 4 pages · 14 figures · 31 tables
Abstract

Although diffusion models with strong visual priors have emerged as powerful backbones for dense prediction, they overlook a core limitation: the stochastic noise at the heart of diffusion sampling is inherently misaligned with dense prediction, which requires a deterministic mapping from image to geometry. In this paper, we show that this stochastic noise corrupts fine-grained spatial cues and pushes the model toward timestep-specific noise objectives, consequently destroying meaningful geometric structure mappings. To address this, we introduce D³-Predictor, a noise-free deterministic diffusion-based dense prediction model built by reformulating a pretrained diffusion model without stochastic noise. Instead of relying on noisy inputs to leverage diffusion priors, D³-Predictor views the pretrained diffusion network as an ensemble of timestep-dependent visual experts and aggregates their heterogeneous priors in a self-supervised manner into a single, clean, and complete geometric prior. Meanwhile, we use task-specific supervision to seamlessly adapt this noise-free prior to dense prediction tasks. Extensive experiments on various dense prediction tasks demonstrate that D³-Predictor achieves competitive or state-of-the-art performance across diverse scenarios. In addition, it requires less than half the training data used by prior work and performs inference efficiently in a single step. Our code, data, and checkpoints are publicly available at this https URL.
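The expert-aggregation idea can be illustrated with a minimal sketch. This is not the authors' code: the function names, the softmax-weighted combination, and the fixed per-expert scores are all illustrative assumptions. The sketch only shows the general pattern of combining feature maps extracted from a frozen backbone at several timesteps into one aggregated prior via learned (here, fixed) weights.

```python
# Hypothetical sketch (not the paper's implementation): combine per-timestep
# "expert" feature maps from a frozen diffusion backbone into a single prior.
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of expert scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate_timestep_experts(features, logits):
    """features: list of (C, H, W) arrays, one per timestep 'expert'.
    logits: per-expert scores (learnable in practice; fixed here).
    Returns one (C, H, W) aggregated prior as a convex combination."""
    w = softmax(np.asarray(logits, dtype=np.float64))
    return sum(wi * f for wi, f in zip(w, features))

# Toy usage: three "experts", each producing a 2x4x4 feature map.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((2, 4, 4)) for _ in range(3)]
prior = aggregate_timestep_experts(feats, [0.5, 1.0, -0.2])
print(prior.shape)  # (2, 4, 4)
```

In the actual method the aggregation is learned self-supervisedly and the experts are the pretrained network's responses at different timesteps; the convex-combination form above is just one simple way to realize such an aggregation.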
