Less is More: Focus Attention for Efficient DETR
arXiv: 2307.12612
24 July 2023
Authors: Dehua Zheng, Wenhui Dong, Hailin Hu, Xinghao Chen, Yunhe Wang
Papers citing "Less is More: Focus Attention for Efficient DETR" (8 papers)
Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers (MQ)
Yunshan Zhong, Yuyao Zhou, Yuxin Zhang, Shen Li, Yong Li, Fei Chao, Zhanpeng Zeng, Rongrong Ji
31 Dec 2024
Token Pruning using a Lightweight Background Aware Vision Transformer (ViT)
Sudhakar Sah, Ravish Kumar, Honnesh Rohmetra, Ehsan Saboori
12 Oct 2024
FastTextSpotter: A High-Efficiency Transformer for Multilingual Scene Text Spotting
Alloy Das, Sanket Biswas, Umapada Pal, Josep Lladós, Saumik Bhattacharya
27 Aug 2024
Prompt-Aware Adapter: Towards Learning Adaptive Visual Tokens for Multimodal Large Language Models
Yue Zhang, Hehe Fan, Yi Yang
24 May 2024
SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization (ViT)
Jialong Guo, Xinghao Chen, Yehui Tang, Yunhe Wang
19 May 2024
YOLO-TLA: An Efficient and Lightweight Small Object Detection Model based on YOLOv5 (ObjD)
Peng Gao, Chun-Lin Ji, Tao Yu, Ruyue Yuan
22 Feb 2024
Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention (ViT)
Xiangcheng Liu, Tianyi Wu, Guodong Guo
28 Sep 2022
DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR (ViT)
Shilong Liu, Feng Li, Hao Zhang, X. Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang
28 Jan 2022