Understanding Self-Attention of Self-Supervised Audio Transformers (arXiv:2006.03265)
5 June 2020
Shu-Wen Yang, Andy T. Liu, Hung-yi Lee
Papers citing "Understanding Self-Attention of Self-Supervised Audio Transformers" (8 of 8 shown)
- Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget
  Andy T. Liu, Yi-Cheng Lin, Haibin Wu, Stefan Winkler, Hung-yi Lee
  09 Sep 2024
- Knowledge Distillation from Non-streaming to Streaming ASR Encoder using Auxiliary Non-streaming Layer
  Kyuhong Shim, Jinkyu Lee, Simyoung Chang, Kyuwoong Hwang
  31 Aug 2023
- Exploring Attention Map Reuse for Efficient Transformer Neural Networks [ViT]
  Kyuhong Shim, Jungwook Choi, Wonyong Sung
  29 Jan 2023
- Self-Supervised Speech Representation Learning: A Review [SSL, AI4TS]
  Abdel-rahman Mohamed, Hung-yi Lee, Lasse Borgholt, Jakob Drachmann Havtorn, Joakim Edin, ..., Shang-Wen Li, Karen Livescu, Lars Maaløe, Tara N. Sainath, Shinji Watanabe
  21 May 2022
- Similarity and Content-based Phonetic Self Attention for Speech Recognition
  Kyuhong Shim, Wonyong Sung
  19 Mar 2022
- Audio Self-supervised Learning: A Survey [SSL]
  Shuo Liu, Adria Mallol-Ragolta, Emilia Parada-Cabeleiro, Kun Qian, Xingshuo Jing, Alexander Kathan, Bin Hu, Bjoern W. Schuller
  02 Mar 2022
- Don't speak too fast: The impact of data bias on self-supervised speech models
  Yen Meng, Yi-Hui Chou, Andy T. Liu, Hung-yi Lee
  15 Oct 2021
- TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech [SSL]
  Andy T. Liu, Shang-Wen Li, Hung-yi Lee
  12 Jul 2020