Transfer Learning from Visual Speech Recognition to Mouthing Recognition in German Sign Language

20 May 2025
Dinh Nam Pham
Eleftherios Avramidis
    SLR
Abstract

Sign Language Recognition (SLR) systems primarily focus on manual gestures, but non-manual features such as mouth movements, specifically mouthing, provide valuable linguistic information. This work directly classifies mouthing instances to their corresponding words in the spoken language while exploring the potential of transfer learning from Visual Speech Recognition (VSR) to mouthing recognition in German Sign Language. We leverage three VSR datasets: one in English, one in German with unrelated words, and one in German containing the same target words as the mouthing dataset, in order to investigate the impact of task similarity in this setting. Our results demonstrate that multi-task learning improves both mouthing recognition and VSR accuracy as well as model robustness, suggesting that mouthing recognition should be treated as a task distinct from, but related to, VSR. This research contributes to the field of SLR by proposing knowledge transfer from VSR to SLR datasets with limited mouthing annotations.
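The multi-task setup described in the abstract can be sketched as a shared visual encoder feeding two task-specific classification heads, one for VSR words and one for mouthings. This is only an illustrative sketch, not the authors' implementation: all layer sizes, vocabulary sizes, and names below are hypothetical.

```python
# Hedged sketch of a multi-task model: a shared encoder with two heads,
# mirroring the idea that mouthing recognition is a distinct but related
# task to VSR. All dimensions and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 512      # hypothetical per-clip lip-region feature size
HIDDEN = 256        # hypothetical shared-representation size
N_VSR_WORDS = 500   # hypothetical VSR vocabulary size
N_MOUTHINGS = 100   # hypothetical mouthing vocabulary size

# Shared encoder weights (the part that transfer learning would reuse)
W_shared = rng.normal(0.0, 0.02, (FEAT_DIM, HIDDEN))
# Task-specific heads, trained jointly in a multi-task objective
W_vsr = rng.normal(0.0, 0.02, (HIDDEN, N_VSR_WORDS))
W_mouth = rng.normal(0.0, 0.02, (HIDDEN, N_MOUTHINGS))

def forward(x):
    """Shared representation, then one logit vector per task."""
    h = np.maximum(x @ W_shared, 0.0)   # shared ReLU encoder
    return h @ W_vsr, h @ W_mouth       # per-task logits

x = rng.normal(size=(4, FEAT_DIM))      # batch of 4 clip features
vsr_logits, mouth_logits = forward(x)
print(vsr_logits.shape, mouth_logits.shape)  # (4, 500) (4, 100)
```

In a joint training loop, a cross-entropy loss on each head would be summed (optionally weighted), so gradients from both tasks update the shared encoder — the mechanism by which VSR data can help when mouthing annotations are scarce.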

@article{pham2025_2505.13784,
  title={Transfer Learning from Visual Speech Recognition to Mouthing Recognition in German Sign Language},
  author={Dinh Nam Pham and Eleftherios Avramidis},
  journal={arXiv preprint arXiv:2505.13784},
  year={2025}
}