FullLoRA-AT: Efficiently Boosting the Robustness of Pretrained Vision Transformers
3 January 2024 · Zheng Yuan, Jie M. Zhang, Shiguang Shan
arXiv: 2401.01752
Papers citing "FullLoRA-AT: Efficiently Boosting the Robustness of Pretrained Vision Transformers" (6 papers shown)
PSLT: A Light-weight Vision Transformer with Ladder Self-Attention and Progressive Shift
Gaojie Wu, Weishi Zheng, Yutong Lu, Q. Tian · ViT · 13 citations · 07 Apr 2023
Diffusion Models for Adversarial Purification
Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar · WIGM · 410 citations · 16 May 2022
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao · ViT · 3,538 citations · 24 Feb 2021
Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML · 3,102 citations · 04 Nov 2016
Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger · PINN, 3DV · 35,884 citations · 25 Aug 2016
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei · VLM, ObjD · 39,083 citations · 01 Sep 2014