One-Step Knowledge Distillation and Fine-Tuning in Using Large Pre-Trained Self-Supervised Learning Models for Speaker Verification

27 May 2023
Ju-Sung Heo, Chan-yeong Lim, Ju-ho Kim, Hyun-Seo Shin, Ha-Jin Yu

Papers citing "One-Step Knowledge Distillation and Fine-Tuning in Using Large Pre-Trained Self-Supervised Learning Models for Speaker Verification"

4 papers shown
STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models
Kangwook Jang, Sungnyun Kim, Hoi-Rim Kim
14 Dec 2023

Emphasized Non-Target Speaker Knowledge in Knowledge Distillation for Automatic Speaker Verification
Duc-Tuan Truong, Ruijie Tao, J. Yip, Kong Aik Lee, Chng Eng Siong
26 Sep 2023

Learning Student-Friendly Teacher Networks for Knowledge Distillation
D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han
12 Feb 2021

VoxCeleb2: Deep Speaker Recognition
Joon Son Chung, Arsha Nagrani, Andrew Zisserman
14 Jun 2018