Explanation on Pretraining Bias of Finetuned Vision Transformer

18 November 2022
Bumjin Park, Jaesik Choi
ViT

Papers citing "Explanation on Pretraining Bias of Finetuned Vision Transformer"

4 / 4 papers shown

  • Robustifying Point Cloud Networks by Refocusing
    Meir Yossef Levi, Guy Gilboa (3DPC), 10 Aug 2023
  • Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze?
    Stephanie Brandl, Oliver Eberle, Jonas Pilot, Anders Søgaard, 25 Apr 2022
  • Emerging Properties in Self-Supervised Vision Transformers
    Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin, 29 Apr 2021
  • Transformers in Vision: A Survey
    Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, F. Khan, M. Shah (ViT), 04 Jan 2021