

STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
14 December 2023
Kangwook Jang
Sungnyun Kim
Hoirin Kim
arXiv (abs) · PDF · HTML · GitHub

Papers citing "STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models"

Showing 2 of 2 papers
HuBERT-VIC: Improving Noise-Robust Automatic Speech Recognition of Speech Foundation Model via Variance-Invariance-Covariance Regularization
Hyebin Ahn, Kangwook Jang, Hoirin Kim
17 Aug 2025
Is Smaller Always Faster? Tradeoffs in Compressing Self-Supervised Speech Transformers
Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang
17 Nov 2022