ResearchTrend.AI

Attention or Convolution: Transformer Encoders in Audio Language Models for Inference Efficiency
arXiv: 2311.02772

5 November 2023
Sungho Jeon, Ching-Feng Yeh, Hakan Inan, Wei-Ning Hsu, Rashi Rungta, Yashar Mehdad, Daniel M. Bikel

Papers citing "Attention or Convolution: Transformer Encoders in Audio Language Models for Inference Efficiency"

2 papers shown
BiT: Robustly Binarized Multi-distilled Transformer
Zechun Liu, Barlas Oğuz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad
MQ
25 May 2022
Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition
Yu Zhang, James Qin, Daniel S. Park, Wei Han, Chung-Cheng Chiu, Ruoming Pang, Quoc V. Le, Yonghui Wu
VLM, SSL
20 Oct 2020