
Incremental Layer-wise Self-Supervised Learning for Efficient Speech Domain Adaptation On Device

1 October 2021
Zhouyuan Huo
Dongseong Hwang
K. Sim
Shefali Garg
Ananya Misra
Nikhil Siddhartha
Trevor Strohman
Françoise Beaufays
Abstract

Streaming end-to-end speech recognition models have been widely deployed on mobile devices and show significant efficiency improvements. These models are typically trained on the server using transcribed speech data. However, the server data distribution can differ substantially from the data distribution on user devices, which can degrade model performance. On-device training faces two main challenges: limited reliable labels and limited training memory. While self-supervised learning algorithms can mitigate the mismatch between domains using unlabeled data, they are not directly applicable on mobile devices because of the memory constraint. In this paper, we propose an incremental layer-wise self-supervised learning algorithm for efficient speech domain adaptation on mobile devices, in which only one layer is updated at a time. Extensive experimental results demonstrate that the proposed algorithm achieves a Word Error Rate (WER) on the target domain that is 24.2% better than the supervised baseline, while using 89.7% less training memory than end-to-end self-supervised learning.
