ResearchTrend.AI

arXiv: 2011.04906
On the Usefulness of Self-Attention for Automatic Speech Recognition with Transformers

Shucong Zhang, Erfan Loweimi, P. Bell, Steve Renals
8 November 2020

Papers citing "On the Usefulness of Self-Attention for Automatic Speech Recognition with Transformers"

17 papers shown
  • How Redundant Is the Transformer Stack in Speech Representation Models?
    Teresa Dorszewski, Albert Kjøller Jacobsen, Lenka Tětková, Lars Kai Hansen
    20 Jan 2025

  • Convexity-based Pruning of Speech Representation Models
    Teresa Dorszewski, Lenka Tětková, Lars Kai Hansen
    16 Aug 2024

  • Linear-Complexity Self-Supervised Learning for Speech Processing
    Shucong Zhang, Titouan Parcollet, Rogier van Dalen, Sourav Bhattacharya
    18 Jul 2024

  • Multi-Convformer: Extending Conformer with Multiple Convolution Kernels
    Darshan Prabhu, Yifan Peng, P. Jyothi, Shinji Watanabe
    04 Jul 2024

  • EfficientASR: Speech Recognition Network Compression via Attention Redundancy and Chunk-Level FFN Optimization
    Jianzong Wang, Ziqi Liang, Xulong Zhang, Ning Cheng, Jing Xiao
    30 Apr 2024

  • Automatic Speech Recognition using Advanced Deep Learning Approaches: A Survey
    Hamza Kheddar, Mustapha Hemis, Yassine Himeur
    02 Mar 2024

  • SpeechAlign: A Framework for Speech Translation Alignment Evaluation
    Belen Alastruey, Aleix Sant, Gerard I. Gállego, David Dale, Marta R. Costa-jussà
    20 Sep 2023

  • SummaryMixing: A Linear-Complexity Alternative to Self-Attention for Speech Recognition and Understanding
    Titouan Parcollet, Rogier van Dalen, Shucong Zhang, S. Bhattacharya
    12 Jul 2023

  • Quantization-Aware and Tensor-Compressed Training of Transformers for Natural Language Understanding
    Ziao Yang, Samridhi Choudhary, Siegfried Kunzmann, Zheng-Wei Zhang
    01 Jun 2023

  • DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models
    Yifan Peng, Yui Sudo, Muhammad Shakeel, Shinji Watanabe
    28 May 2023

  • Structured Pruning of Self-Supervised Pre-trained Models for Speech Recognition and Understanding
    Yifan Peng, Kwangyoun Kim, Felix Wu, Prashant Sridhar, Shinji Watanabe
    27 Feb 2023

  • SegAugment: Maximizing the Utility of Speech Translation Data with Segmentation-based Augmentations
    Ioannis Tsiamas, José A. R. Fonollosa, Marta R. Costa-jussà
    19 Dec 2022

  • Compressing Transformer-based Self-Supervised Models for Speech Processing
    Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang
    17 Nov 2022

  • Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding
    Yifan Peng, Siddharth Dalmia, Ian Lane, Shinji Watanabe
    06 Jul 2022

  • Squeezeformer: An Efficient Transformer for Automatic Speech Recognition
    Sehoon Kim, A. Gholami, Albert Eaton Shaw, Nicholas Lee, K. Mangalam, Jitendra Malik, Michael W. Mahoney, Kurt Keutzer
    02 Jun 2022

  • On the Locality of Attention in Direct Speech Translation
    Belen Alastruey, Javier Ferrando, Gerard I. Gállego, Marta R. Costa-jussà
    19 Apr 2022

  • Similarity and Content-based Phonetic Self Attention for Speech Recognition
    Kyuhong Shim, Wonyong Sung
    19 Mar 2022