Approximate Nullspace Augmented Finetuning for Robust Vision Transformers

15 March 2024
Haoyang Liu
Aditya Singh
Yijiang Li
Haohan Wang
    AAML
    ViT
Abstract

Enhancing the robustness of deep learning models, particularly vision transformers (ViTs), is crucial for their real-world deployment. In this work, we present a finetuning approach, inspired by the concept of the nullspace from linear algebra, that enhances the robustness of vision transformers. Our investigation centers on whether a vision transformer can exhibit resilience to input variations akin to the nullspace property of linear mappings, which would imply that perturbations sampled from this nullspace do not influence the model's output when added to the input. We start from the observation that many existing ViTs satisfy this property because their patch embedding layer has a non-trivial nullspace. We then extend the notion of the nullspace to nonlinear settings and demonstrate that it is possible to synthesize approximate nullspace elements for a ViT's encoder blocks through optimization. Finally, we propose a finetuning strategy for ViTs wherein we augment the training data with synthesized approximate nullspace noise. We find that our finetuning approach significantly improves the models' robustness to both adversarial and natural image perturbations. Code is available at: this https URL.
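The abstract describes two concrete steps: computing the exact nullspace of the linear patch-embedding layer, and optimizing approximate nullspace noise for the nonlinear encoder. The sketch below illustrates both in PyTorch. It is not the authors' released code (see the footnoted URL for that); the function names and all hyperparameters (SVD tolerance, noise radius, step count, learning rate) are illustrative assumptions.

```python
# Minimal sketch of the two ideas in the abstract (PyTorch).
# NOT the authors' released code; hyperparameters are illustrative.
import torch

def patch_embedding_nullspace(conv: torch.nn.Conv2d) -> torch.Tensor:
    """Exact nullspace of a ViT patch embedding.

    A Conv2d with kernel_size == stride == P acts on each flattened
    (C, P, P) patch as a (D, C*P*P) matrix; when C*P*P > D this matrix
    has a non-trivial nullspace, recovered here from the right-singular
    vectors whose singular values are numerically zero.
    """
    W = conv.weight.detach().flatten(1)                # (D, C*P*P)
    _, S, Vh = torch.linalg.svd(W, full_matrices=True)
    rank = int((S > 1e-6 * S[0]).sum())
    return Vh[rank:]                                   # rows span null(W)

def sample_nullspace_noise(basis: torch.Tensor, C: int, P: int,
                           scale: float = 1.0) -> torch.Tensor:
    """Random patch-shaped perturbation from span(basis).

    Tiling the result over every patch of an image, e.g. with
    noise.repeat(1, H // P, W // P), leaves the patch embeddings
    (and hence the model output) exactly unchanged.
    """
    coeffs = scale * torch.randn(basis.shape[0])
    return (coeffs @ basis).reshape(C, P, P)

def fit_approximate_nullspace_noise(model, images, steps=200,
                                    lr=1e-2, radius=0.5):
    """Optimize a fixed-norm perturbation z so that model(x + z) stays
    close to model(x), i.e. z behaves like an approximate nullspace
    element of the nonlinear network. Projecting z back to a fixed-norm
    shell each step keeps the trivial solution z = 0 out of reach.
    """
    with torch.no_grad():
        clean = model(images)
    z = torch.randn_like(images[:1])
    with torch.no_grad():
        z.mul_(radius / z.norm())
    z.requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (model(images + z) - clean).pow(2).mean()
        loss.backward()
        opt.step()
        with torch.no_grad():                          # project to the shell
            z.mul_(radius / z.norm().clamp_min(1e-12))
    return z.detach()
```

In the augmentation step the abstract describes, perturbations like z, re-synthesized during training, would be added to training images before the usual finetuning loss is computed.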

@article{liu2025_2403.10476,
  title={Approximate Nullspace Augmented Finetuning for Robust Vision Transformers},
  author={Haoyang Liu and Aditya Singh and Yijiang Li and Haohan Wang},
  journal={arXiv preprint arXiv:2403.10476},
  year={2025}
}