
Escaping the Big Data Paradigm with Compact Transformers
arXiv:2104.05704 · 12 April 2021
Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, Humphrey Shi

Papers citing "Escaping the Big Data Paradigm with Compact Transformers"

50 / 215 papers shown
Multi-modal Deep Learning
Chen Yuhua · MedIm · 06 Mar 2024

ARNN: Attentive Recurrent Neural Network for Multi-channel EEG Signals to Identify Epileptic Seizures
S. Rukhsar, Anil Kumar Tiwari · 05 Mar 2024

Fourier-basis Functions to Bridge Augmentation Gap: Rethinking Frequency Augmentation in Image Classification
Puru Vaish, Shunxin Wang, N. Strisciuglio · 04 Mar 2024

Can Transformers Capture Spatial Relations between Objects?
Chuan Wen, Dinesh Jayaraman, Yang Gao · ViT · 01 Mar 2024

Deep Homography Estimation for Visual Place Recognition
Feng Lu, Shuting Dong, Lijun Zhang, Bingxi Liu, Xiangyuan Lan, Dongmei Jiang, Chun Yuan · 25 Feb 2024

Towards Cross-Domain Continual Learning
Marcus Vinícius de Carvalho, Mahardhika Pratama, Jie M. Zhang, Chua Haoyan, E. Yapp · CLL · 19 Feb 2024

Pre-training of Lightweight Vision Transformers on Small Datasets with Minimally Scaled Images
Jen Hong Tan · ViT · 06 Feb 2024

Do deep neural networks utilize the weight space efficiently?
Onur Can Koyun, B. U. Toreyin · 26 Jan 2024

Cross-Domain Few-Shot Learning via Adaptive Transformer Networks
Naeem Paeedeh, Mahardhika Pratama, M. A. Ma'sum, Wolfgang Mayer, Zehong Cao, Ryszard Kowlczyk · 25 Jan 2024

OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning
Chu Myaet Thwal, Minh N. H. Nguyen, Ye Lin Tun, Seongjin Kim, My T. Thai, Choong Seon Hong · 22 Jan 2024

Harnessing Orthogonality to Train Low-Rank Neural Networks
D. Coquelin, Katharina Flügel, Marie Weiel, Nicholas Kiefer, Charlotte Debus, Achim Streit, Markus Goetz · 16 Jan 2024

Fusing Echocardiography Images and Medical Records for Continuous Patient Stratification
Nathan Painchaud, P. Courand, Pierre-Marc Jodoin, Nicolas Duchateau, Olivier Bernard · 15 Jan 2024

Knee or ROC
Veronica Wendt, Byunggu Yu, Caleb Kelly, Junwhan Kim · 14 Jan 2024

Spikformer V2: Join the High Accuracy Club on ImageNet with an SNN Ticket
Zhaokun Zhou, Kaiwei Che, Wei Fang, Keyu Tian, Yuesheng Zhu, Shuicheng Yan, Yonghong Tian, Liuliang Yuan · ViT · 04 Jan 2024

Algebraic Positional Encodings
Konstantinos Kogkalidis, Jean-Philippe Bernardy, Vikas K. Garg · 26 Dec 2023

Structured Inverse-Free Natural Gradient: Memory-Efficient & Numerically-Stable KFAC
Wu Lin, Felix Dangel, Runa Eschenhagen, Kirill Neklyudov, Agustinus Kristiadi, Richard E. Turner, Alireza Makhzani · 09 Dec 2023

Are Vision Transformers More Data Hungry Than Newborn Visual Systems?
Lalit Pandey, Samantha M. W. Wood, Justin N. Wood · 05 Dec 2023

Spiking Neural Networks with Dynamic Time Steps for Vision Transformers
Gourav Datta, Zeyu Liu, Anni Li, P. Beerel · 28 Nov 2023

MARformer: An Efficient Metal Artifact Reduction Transformer for Dental CBCT Images
Yuxuan Shi, Jun Xu, Dinggang Shen · MedIm · 16 Nov 2023

Traffic Sign Recognition Using Local Vision Transformer
Ali Farzipour, Omid Nejati Manzari, S. B. Shokouhi · ViT · 11 Nov 2023

TinyFormer: Efficient Transformer Design and Deployment on Tiny Devices
Jianlei Yang, Jiacheng Liao, Fanding Lei, Meichen Liu, Junyi Chen, Lingkun Long, Han Wan, Bei Yu, Weisheng Zhao · MoE · 03 Nov 2023

Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems
David T. Hoffmann, Simon Schrodi, Jelena Bratulić, Nadine Behrmann, Volker Fischer, Thomas Brox · 19 Oct 2023

AANet: Aggregation and Alignment Network with Semi-hard Positive Sample Mining for Hierarchical Place Recognition
Feng Lu, Lijun Zhang, Shuting Dong, Baifan Chen, Chun Yuan · 08 Oct 2023

PriViT: Vision Transformers for Fast Private Inference
Naren Dhyani, Jianqiao Mo, Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde · 06 Oct 2023

Asca: less audio data is more insightful
Xiang Li, Jing Chen, Chao Li, Hongwu Lv · 23 Sep 2023

Toward a Deeper Understanding: RetNet Viewed through Convolution
Chenghao Li, Chaoning Zhang · ViT · 11 Sep 2023

DeViT: Decomposing Vision Transformers for Collaborative Inference in Edge Devices
Guanyu Xu, Zhiwei Hao, Yong Luo, Han Hu, J. An, Shiwen Mao · ViT · 10 Sep 2023

Exemplar-Free Continual Transformer with Convolutions
Anurag Roy, Vinay K. Verma, Sravan Voonna, Kripabandhu Ghosh, Saptarshi Ghosh, Abir Das · CLL, BDL · 22 Aug 2023

Patch Is Not All You Need
Chang-bo Li, Jie M. Zhang, Yang Wei, Zhilong Ji, Jinfeng Bai, Shiguang Shan · ViT · 21 Aug 2023

Improving Depth Gradient Continuity in Transformers: A Comparative Study on Monocular Depth Estimation with CNN
Jiawei Yao, Tong Wu, Xiaofeng Zhang · ViT, MDE · 16 Aug 2023

Attention-free Spikformer: Mixing Spike Sequences with Simple Linear Transforms
Qingyu Wang, Duzhen Zhang, Tielin Zhang, Bo Xu · 02 Aug 2023

SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings
Yejia Zhang, Pengfei Gu, Nishchal Sapkota, Da Chen · 23 Jul 2023

Revisiting Implicit Models: Sparsity Trade-offs Capability in Weight-tied Model for Vision Tasks
Haobo Song, Soumajit Majumder, Tao R. Lin · VLM · 16 Jul 2023

ITA: An Energy-Efficient Attention and Softmax Accelerator for Quantized Transformers
Gamze Islamoglu, Moritz Scherer, G. Paulin, Tim Fischer, Victor J. B. Jung, Angelo Garofalo, Luca Benini · MQ · 07 Jul 2023

More for Less: Compact Convolutional Transformers Enable Robust Medical Image Classification with Limited Data
Andrew Gao · MedIm · 01 Jul 2023

Triggering Dark Showers with Conditional Dual Auto-Encoders
Luca Anzalone, S. S. Chhibra, B. Maier, N. Chernyavskaya, M. Pierini · AI4CE · 22 Jun 2023

Mitigating Communication Costs in Neural Networks: The Role of Dendritic Nonlinearity
Xundong Wu, Pengfei Zhao, Zilin Yu, Lei Ma, K. Yip, Huajin Tang, Gang Pan, Poirazi Panayiota, Tiejun Huang · 21 Jun 2023

Lightweight Monocular Depth Estimation via Token-Sharing Transformer
Dong-Jae Lee, Jae Young Lee, Hyounguk Shon, Eojindl Yi, Yeong-Hun Park, Sung-Jin Cho, Junmo Kim · ViT, MDE · 09 Jun 2023

T-ADAF: Adaptive Data Augmentation Framework for Image Classification Network based on Tensor T-product Operator
F. Han, Yun Miao, Zhao Sun, Yimin Wei · 07 Jun 2023

Exploring Simple, High Quality Out-of-Distribution Detection with L2 Normalization
J. Haas, William Yolland, B. Rabus · OODD · 07 Jun 2023

Energy-Based Models for Cross-Modal Localization using Convolutional Transformers
Alan Wu, Michael S. Ryoo · 06 Jun 2023

Auto-Spikformer: Spikformer Architecture Search
Kaiwei Che, Zhaokun Zhou, Zhengyu Ma, Wei Fang, Yanqing Chen, Shuaijie Shen, Liuliang Yuan, Yonghong Tian · 01 Jun 2023

Caterpillar: A Pure-MLP Architecture with Shifted-Pillars-Concatenation
J. Sun, Xiaoshuang Shi, Zhiyuan Weng, Kaidi Xu, H. Shen, Xiao-lan Zhu · MLLM · 28 May 2023

Learning Sequence Descriptor based on Spatio-Temporal Attention for Visual Place Recognition
Junqiao Zhao, Fenglin Zhang, Yingfeng Cai, Geng Tian, Wenjie Mu, Chen Ye, Tiantian Feng · 19 May 2023

Mimetic Initialization of Self-Attention Layers
Asher Trockman, J. Zico Kolter · 16 May 2023

Lightweight Convolution Transformer for Cross-patient Seizure Detection in Multi-channel EEG Signals
S. Rukhsar, A. Tiwari · 07 May 2023

BrainNPT: Pre-training of Transformer networks for brain network classification
Jinlong Hu, Ya-Lin Huang, Nan Wang, Shoubin Dong · ViT, MedIm · 02 May 2023

Vision Conformer: Incorporating Convolutions into Vision Transformer Layers
Brian Kenji Iwana, Akihiro Kusuda · ViT · 27 Apr 2023

Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network
Chenlin Zhou, Liutao Yu, Zhaokun Zhou, Zhengyu Ma, Han Zhang, Huihui Zhou, Yonghong Tian · 24 Apr 2023

Dilated-UNet: A Fast and Accurate Medical Image Segmentation Approach using a Dilated Transformer and U-Net Architecture
Davoud Saadati, Omid Nejati Manzari, S. Mirzakuchaki · ViT, MedIm · 22 Apr 2023