D-Feat Occlusions: Diffusion Features for Robustness to Partial Visual Occlusions in Object Recognition

8 April 2025
Rupayan Mallick
Sibo Dong
Nataniel Ruiz
Sarah Adel Bargal
Abstract

Applications of diffusion models to visual tasks have been quite noteworthy. This paper targets making classification models more robust to occlusions for the task of object recognition by proposing a pipeline that utilizes a frozen diffusion model. Diffusion features have demonstrated success in image generation and image completion while understanding image context. Occlusion can be posed as an image completion problem by deeming the pixels of the occluder to be "missing." We hypothesize that such features can help hallucinate object visual features behind occluding objects, and hence we propose using them to make models more occlusion-robust. We design experiments to include both input-based and feature-based augmentations. Input-based augmentations involve finetuning on images where the occluder pixels are inpainted, and feature-based augmentations involve augmenting classification features with intermediate diffusion features. We demonstrate that our proposed use of diffusion-based features results in models that are more robust to partial object occlusions for both Transformers and ConvNets on ImageNet with simulated occlusions. We also propose a dataset that encompasses real-world occlusions and demonstrate that our method is more robust to partial object occlusions on it as well.
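As a concrete illustration of the feature-based augmentation described in the abstract, the sketch below extracts intermediate activations from a frozen diffusion UNet and concatenates them with a standard classifier's features before the final linear head. Everything here is an assumption for illustration, not the paper's exact pipeline: the DDPM checkpoint, the choice of the UNet mid-block, the fixed timestep, the global-average pooling, and the `DFeatClassifier` name are all hypothetical, and the image is fed to the UNet directly (the actual method may noise the input first).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from diffusers import UNet2DModel

# Frozen diffusion UNet used only as a feature extractor.
# Checkpoint choice is illustrative, not the one used in the paper.
unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32").eval()
for p in unet.parameters():
    p.requires_grad_(False)

# Capture the UNet's mid-block activations with a forward hook.
feats = {}
def _hook(module, inputs, output):
    feats["mid"] = output
unet.mid_block.register_forward_hook(_hook)

class DFeatClassifier(nn.Module):
    """Sketch: concatenate pooled diffusion features with CNN features."""
    def __init__(self, num_classes=1000, t=50):
        super().__init__()
        self.t = t  # fixed noising timestep (a hyperparameter assumption)
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        # Drop the original fc layer; keep everything up to global avg pool.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        mid_ch = unet.config.block_out_channels[-1]  # mid-block channel width
        self.head = nn.Linear(2048 + mid_ch, num_classes)

    def forward(self, x_cls, x_diff):
        # x_cls: classifier-resolution image (possibly occluded);
        # x_diff: same image resized/normalized for the UNet (here 32x32, [-1, 1]).
        cls_feat = self.backbone(x_cls).flatten(1)  # (B, 2048)
        t = torch.full((x_diff.shape[0],), self.t,
                       device=x_diff.device, dtype=torch.long)
        with torch.no_grad():
            unet(x_diff, t)  # hook stores mid-block features as a side effect
        diff_feat = feats["mid"].mean(dim=(2, 3))  # global average pool
        return self.head(torch.cat([cls_feat, diff_feat], dim=1))
```

Because the UNet stays frozen, only the backbone and the linear head receive gradients during finetuning; the diffusion features act purely as a fixed, context-aware descriptor of the (possibly occluded) image.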

@article{mallick2025_2504.06432,
  title={D-Feat Occlusions: Diffusion Features for Robustness to Partial Visual Occlusions in Object Recognition},
  author={Rupayan Mallick and Sibo Dong and Nataniel Ruiz and Sarah Adel Bargal},
  journal={arXiv preprint arXiv:2504.06432},
  year={2025}
}