AttentionLite: Towards Efficient Self-Attention Models for Vision
arXiv: 2101.05216 · 21 December 2020
Souvik Kundu, Sairam Sundaresan
Papers citing "AttentionLite: Towards Efficient Self-Attention Models for Vision"

2 / 2 papers shown
Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models
Qinyuan Ye, Madian Khabsa, M. Lewis, Sinong Wang, Xiang Ren, Aaron Jaech
16 Oct 2021
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise
Souvik Kundu, Massoud Pedram, P. Beerel
AAML
06 Oct 2021