On-demand compute reduction with stochastic wav2vec 2.0

25 April 2022
Apoorv Vyas
Wei-Ning Hsu
Michael Auli
Alexei Baevski

Papers citing "On-demand compute reduction with stochastic wav2vec 2.0"

10 papers shown

  1. Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget
     Andy T. Liu, Yi-Cheng Lin, Haibin Wu, Stefan Winkler, Hung-yi Lee
     09 Sep 2024
  2. Sustainable self-supervised learning for speech representations
     Luis Lugo, Valentin Vielzeuf
     11 Jun 2024
  3. Efficiency-oriented approaches for self-supervised speech representation learning
     Luis Lugo, Valentin Vielzeuf
     18 Dec 2023 (SSL)
  4. Attention or Convolution: Transformer Encoders in Audio Language Models for Inference Efficiency
     Sungho Jeon, Ching-Feng Yeh, Hakan Inan, Wei-Ning Hsu, Rashi Rungta, Yashar Mehdad, Daniel M. Bikel
     05 Nov 2023
  5. Comparative Analysis of the wav2vec 2.0 Feature Extractor
     Peter Vieting, Ralf Schlüter, Hermann Ney
     08 Aug 2023
  6. Accelerating Transducers through Adjacent Token Merging
     Yuang Li, Yu-Huan Wu, Jinyu Li, Shujie Liu
     28 Jun 2023
  7. Adapting Multilingual Speech Representation Model for a New, Underresourced Language through Multilingual Fine-tuning and Continued Pretraining
     Karol Nowakowski, M. Ptaszynski, Kyoko Murasaki, Jagna Nieuwazny
     18 Jan 2023
  8. Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language
     Alexei Baevski, Arun Babu, Wei-Ning Hsu, Michael Auli
     14 Dec 2022 (VLM, SSL)
  9. Once-for-All Sequence Compression for Self-Supervised Speech Models
     Hsuan-Jui Chen, Yen Meng, Hung-yi Lee
     04 Nov 2022
  10. On Compressing Sequences for Self-Supervised Speech Models
      Yen Meng, Hsuan-Jui Chen, Jiatong Shi, Shinji Watanabe, Paola García, Hung-yi Lee, Hao Tang
      13 Oct 2022 (SSL)