
Shrinking Bigfoot: Reducing wav2vec 2.0 footprint

29 March 2021
Zilun Peng, Akshay Budhkar, Ilana Tuil, J. Levy, Parinaz Sobhani, Raphael Cohen, J. Nassour

Papers citing "Shrinking Bigfoot: Reducing wav2vec 2.0 footprint"

12 papers shown

DAISY: Data Adaptive Self-Supervised Early Exit for Speech Representation Models
T. Lin, Hung-yi Lee, Hao Tang
08 Jun 2024

AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting
Shreyan Ganguly, Roshan Nayak, Rakshith Rao, Ujan Deb, AP Prathosh
11 May 2024

On-Device Constrained Self-Supervised Speech Representation Learning for Keyword Spotting via Knowledge Distillation
Gene-Ping Yang, Yue Gu, Qingming Tang, Dongsu Du, Yuzong Liu
06 Jul 2023

One-Step Knowledge Distillation and Fine-Tuning in Using Large Pre-Trained Self-Supervised Learning Models for Speaker Verification
Ju-Sung Heo, Chan-yeong Lim, Ju-ho Kim, Hyun-Seo Shin, Ha-Jin Yu
27 May 2023

Structured Pruning of Self-Supervised Pre-trained Models for Speech Recognition and Understanding
Yifan Peng, Kwangyoun Kim, Felix Wu, Prashant Sridhar, Shinji Watanabe
27 Feb 2023

Compressing Transformer-based self-supervised models for speech processing
Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang
17 Nov 2022

Efficient Utilization of Large Pre-Trained Models for Low Resource ASR
Peter Vieting, Christoph Lüscher, Julian Dierkes, Ralf Schlüter, Hermann Ney
26 Oct 2022

RedApt: An Adaptor for wav2vec 2 Encoding Faster and Smaller Speech Translation without Quality Compromise
Jinming Zhao, Haomiao Yang, Gholamreza Haffari, Ehsan Shareghi
16 Oct 2022

FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
Yeonghyeon Lee, Kangwook Jang, Jahyun Goo, Youngmoon Jung, Hoi-Rim Kim
01 Jul 2022

Fast Real-time Personalized Speech Enhancement: End-to-End Enhancement Network (E3Net) and Knowledge Distillation
Manthan Thakker, Sefik Emre Eskimez, Takuya Yoshioka, Huaming Wang
02 Apr 2022

LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT
Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei, Yu Zhang, Tom Ko, Haizhou Li
29 Mar 2022

DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT
Heng-Jui Chang, Shu-Wen Yang, Hung-yi Lee
05 Oct 2021